00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1031
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3698
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.053 The recommended git tool is: git
00:00:00.053 using credential 00000000-0000-0000-0000-000000000002
00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.071 Fetching changes from the remote Git repository
00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.109 Using shallow fetch with depth 1
00:00:00.109 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.109 > git --version # timeout=10
00:00:00.138 > git --version # 'git version 2.39.2'
00:00:00.138 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.165 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.165 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.084 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.096 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.107 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.107 > git config core.sparsecheckout # timeout=10
00:00:04.120 > git read-tree -mu HEAD # timeout=10
00:00:04.135 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.161 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.162 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.254 [Pipeline] Start of Pipeline
00:00:04.270 [Pipeline] library
00:00:04.272 Loading library shm_lib@master
00:00:04.272 Library shm_lib@master is cached. Copying from home.
00:00:04.288 [Pipeline] node
00:00:04.310 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:04.312 [Pipeline] {
00:00:04.323 [Pipeline] catchError
00:00:04.325 [Pipeline] {
00:00:04.337 [Pipeline] wrap
00:00:04.347 [Pipeline] {
00:00:04.356 [Pipeline] stage
00:00:04.359 [Pipeline] { (Prologue)
00:00:04.380 [Pipeline] echo
00:00:04.382 Node: VM-host-SM9
00:00:04.389 [Pipeline] cleanWs
00:00:04.398 [WS-CLEANUP] Deleting project workspace...
00:00:04.398 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.405 [WS-CLEANUP] done
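The prologue above pins the helper repo to an exact revision rather than a branch tip: a depth-1 fetch of refs/heads/master followed by a forced checkout of the recorded SHA, so every node reads the same jjb config regardless of later pushes. A minimal stand-alone sketch of that pattern (the target directory name is illustrative; with --depth=1 the checked-out SHA must be the fetched tip, as it is in this run):

    # Shallow-fetch one branch, then detach at the exact revision the build recorded.
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD   # db4637e8b949f278f369ec13f70585206ccd9507 here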
00:00:04.688 [Pipeline] setCustomBuildProperty
00:00:04.776 [Pipeline] httpRequest
00:00:05.772 [Pipeline] echo
00:00:05.774 Sorcerer 10.211.164.20 is alive
00:00:05.783 [Pipeline] retry
00:00:05.784 [Pipeline] {
00:00:05.797 [Pipeline] httpRequest
00:00:05.803 HttpMethod: GET
00:00:05.803 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.803 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.805 Response Code: HTTP/1.1 200 OK
00:00:05.805 Success: Status code 200 is in the accepted range: 200,404
00:00:05.805 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.667 [Pipeline] }
00:00:06.682 [Pipeline] // retry
00:00:06.687 [Pipeline] sh
00:00:06.965 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.979 [Pipeline] httpRequest
00:00:07.597 [Pipeline] echo
00:00:07.598 Sorcerer 10.211.164.20 is alive
00:00:07.608 [Pipeline] retry
00:00:07.610 [Pipeline] {
00:00:07.624 [Pipeline] httpRequest
00:00:07.629 HttpMethod: GET
00:00:07.629 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.630 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:07.640 Response Code: HTTP/1.1 200 OK
00:00:07.641 Success: Status code 200 is in the accepted range: 200,404
00:00:07.641 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:08.299 [Pipeline] }
00:01:08.317 [Pipeline] // retry
00:01:08.325 [Pipeline] sh
00:01:08.605 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:11.151 [Pipeline] sh
00:01:11.432 + git -C spdk log --oneline -n5
00:01:11.432 c13c99a5e test: Various fixes for Fedora40
00:01:11.432 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:11.432 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:11.432 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:11.432 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:11.453 [Pipeline] withCredentials
00:01:11.464 > git --version # timeout=10
00:01:11.477 > git --version # 'git version 2.39.2'
00:01:11.493 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:11.496 [Pipeline] {
00:01:11.505 [Pipeline] retry
00:01:11.506 [Pipeline] {
00:01:11.521 [Pipeline] sh
00:01:11.802 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:11.827 [Pipeline] }
00:01:11.848 [Pipeline] // retry
00:01:11.856 [Pipeline] }
00:01:11.903 [Pipeline] // withCredentials
00:01:11.913 [Pipeline] httpRequest
00:01:12.279 [Pipeline] echo
00:01:12.281 Sorcerer 10.211.164.20 is alive
00:01:12.291 [Pipeline] retry
00:01:12.293 [Pipeline] {
00:01:12.307 [Pipeline] httpRequest
00:01:12.312 HttpMethod: GET
00:01:12.312 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:12.313 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:12.323 Response Code: HTTP/1.1 200 OK
00:01:12.323 Success: Status code 200 is in the accepted range: 200,404
00:01:12.324 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:22.534 [Pipeline] }
00:01:22.557 [Pipeline] // retry
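Each tarball above is fetched inside a retry block and unpacked with tar --no-same-owner, so the extracted files belong to the extracting user instead of the UIDs recorded in the archive. A rough shell equivalent of the fetch-with-retry step (curl stands in for the Jenkins httpRequest step; the attempt count and sleep are illustrative):

    # Retry a download a few times, then unpack without restoring archive ownership.
    fetch_and_unpack() {
        local url=$1 out=${1##*/} attempt
        for attempt in 1 2 3; do
            curl -fSL -o "$out" "$url" && break
            echo "attempt $attempt failed, retrying..." >&2
            sleep 5
        done
        [[ -s $out ]] || return 1
        tar --no-same-owner -xf "$out"
    }
    fetch_and_unpack http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz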
00:01:22.566 [Pipeline] sh
00:01:22.850 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:24.238 [Pipeline] sh
00:01:24.517 + git -C dpdk log --oneline -n5
00:01:24.518 caf0f5d395 version: 22.11.4
00:01:24.518 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:24.518 dc9c799c7d vhost: fix missing spinlock unlock
00:01:24.518 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:24.518 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:24.536 [Pipeline] writeFile
00:01:24.551 [Pipeline] sh
00:01:24.832 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:24.843 [Pipeline] sh
00:01:25.122 + cat autorun-spdk.conf
00:01:25.123 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.123 SPDK_TEST_NVMF=1
00:01:25.123 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.123 SPDK_TEST_URING=1
00:01:25.123 SPDK_TEST_USDT=1
00:01:25.123 SPDK_RUN_UBSAN=1
00:01:25.123 NET_TYPE=virt
00:01:25.123 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:25.123 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:25.123 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.130 RUN_NIGHTLY=1
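autorun-spdk.conf written above is a plain shell fragment: prepare_nvme.sh below (and later, inside the VM, autorun.sh) simply sources it and branches on the flags, which is why its assignments reappear as ++ xtrace lines. A minimal sketch of consuming such a file, assuming the flag names shown:

    # Source the job configuration, then gate optional steps on its flags.
    source ./autorun-spdk.conf
    if (( SPDK_TEST_NVMF == 1 )); then
        echo "NVMe-oF tests enabled, transport: ${SPDK_TEST_NVMF_TRANSPORT}"
    fi
    # An external DPDK build is used when SPDK_RUN_EXTERNAL_DPDK points at its build dir.
    [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} ]] && echo "using DPDK at ${SPDK_RUN_EXTERNAL_DPDK}"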
00:01:25.132 [Pipeline] }
00:01:25.146 [Pipeline] // stage
00:01:25.161 [Pipeline] stage
00:01:25.164 [Pipeline] { (Run VM)
00:01:25.177 [Pipeline] sh
00:01:25.458 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:25.458 + echo 'Start stage prepare_nvme.sh'
00:01:25.458 Start stage prepare_nvme.sh
00:01:25.458 + [[ -n 0 ]]
00:01:25.458 + disk_prefix=ex0
00:01:25.458 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:01:25.458 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:01:25.458 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:01:25.458 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.458 ++ SPDK_TEST_NVMF=1
00:01:25.458 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.458 ++ SPDK_TEST_URING=1
00:01:25.458 ++ SPDK_TEST_USDT=1
00:01:25.458 ++ SPDK_RUN_UBSAN=1
00:01:25.458 ++ NET_TYPE=virt
00:01:25.458 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:25.458 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:25.458 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.458 ++ RUN_NIGHTLY=1
00:01:25.458 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:25.458 + nvme_files=()
00:01:25.458 + declare -A nvme_files
00:01:25.458 + backend_dir=/var/lib/libvirt/images/backends
00:01:25.458 + nvme_files['nvme.img']=5G
00:01:25.458 + nvme_files['nvme-cmb.img']=5G
00:01:25.458 + nvme_files['nvme-multi0.img']=4G
00:01:25.458 + nvme_files['nvme-multi1.img']=4G
00:01:25.458 + nvme_files['nvme-multi2.img']=4G
00:01:25.458 + nvme_files['nvme-openstack.img']=8G
00:01:25.458 + nvme_files['nvme-zns.img']=5G
00:01:25.458 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:25.458 + (( SPDK_TEST_FTL == 1 ))
00:01:25.458 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:25.458 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:25.458 + for nvme in "${!nvme_files[@]}"
00:01:25.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:01:25.458 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.458 + for nvme in "${!nvme_files[@]}"
00:01:25.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:01:25.458 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.458 + for nvme in "${!nvme_files[@]}"
00:01:25.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:01:25.718 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:25.718 + for nvme in "${!nvme_files[@]}"
00:01:25.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:01:25.718 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.718 + for nvme in "${!nvme_files[@]}"
00:01:25.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:01:25.718 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.718 + for nvme in "${!nvme_files[@]}"
00:01:25.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:01:25.718 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.718 + for nvme in "${!nvme_files[@]}"
00:01:25.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:01:25.977 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.977 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:01:25.977 + echo 'End stage prepare_nvme.sh'
00:01:25.977 End stage prepare_nvme.sh
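prepare_nvme.sh drives backing-file creation from an associative array mapping image names to sizes, looping over the keys and delegating to create_nvme_img.sh; the Formatting lines above are the resulting qemu-img output (raw format, fallocate preallocation). A condensed sketch of the same pattern, with qemu-img standing in for the helper script:

    # Map image file names to sizes, then create each raw backing file.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G
    )
    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex0
    for nvme in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "${backend_dir}/${disk_prefix}-${nvme}" "${nvme_files[$nvme]}"
    done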
00:01:25.989 [Pipeline] sh
00:01:26.270 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:26.270 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:01:26.529
00:01:26.529 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:26.529 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:26.529 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:26.529 HELP=0
00:01:26.529 DRY_RUN=0
00:01:26.529 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:01:26.529 NVME_DISKS_TYPE=nvme,nvme,
00:01:26.529 NVME_AUTO_CREATE=0
00:01:26.529 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:01:26.529 NVME_CMB=,,
00:01:26.529 NVME_PMR=,,
00:01:26.529 NVME_ZNS=,,
00:01:26.529 NVME_MS=,,
00:01:26.529 NVME_FDP=,,
00:01:26.529 SPDK_VAGRANT_DISTRO=fedora39
00:01:26.529 SPDK_VAGRANT_VMCPU=10
00:01:26.529 SPDK_VAGRANT_VMRAM=12288
00:01:26.529 SPDK_VAGRANT_PROVIDER=libvirt
00:01:26.529 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:26.529 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:26.529 SPDK_OPENSTACK_NETWORK=0
00:01:26.529 VAGRANT_PACKAGE_BOX=0
00:01:26.529 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:26.529 FORCE_DISTRO=true
00:01:26.529 VAGRANT_BOX_VERSION=
00:01:26.529 EXTRA_VAGRANTFILES=
00:01:26.529 NIC_MODEL=e1000
00:01:26.529
00:01:26.529 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:26.529 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:29.062 Bringing machine 'default' up with 'libvirt' provider...
00:01:29.999 ==> default: Creating image (snapshot of base box volume).
00:01:29.999 ==> default: Creating domain with the following settings...
00:01:29.999 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733379985_4cefc559c344275c9e98
00:01:29.999 ==> default: -- Domain type: kvm
00:01:29.999 ==> default: -- Cpus: 10
00:01:29.999 ==> default: -- Feature: acpi
00:01:29.999 ==> default: -- Feature: apic
00:01:29.999 ==> default: -- Feature: pae
00:01:29.999 ==> default: -- Memory: 12288M
00:01:29.999 ==> default: -- Memory Backing: hugepages:
00:01:29.999 ==> default: -- Management MAC:
00:01:29.999 ==> default: -- Loader:
00:01:29.999 ==> default: -- Nvram:
00:01:29.999 ==> default: -- Base box: spdk/fedora39
00:01:29.999 ==> default: -- Storage pool: default
00:01:29.999 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733379985_4cefc559c344275c9e98.img (20G)
00:01:29.999 ==> default: -- Volume Cache: default
00:01:29.999 ==> default: -- Kernel:
00:01:29.999 ==> default: -- Initrd:
00:01:29.999 ==> default: -- Graphics Type: vnc
00:01:29.999 ==> default: -- Graphics Port: -1
00:01:29.999 ==> default: -- Graphics IP: 127.0.0.1
00:01:29.999 ==> default: -- Graphics Password: Not defined
00:01:29.999 ==> default: -- Video Type: cirrus
00:01:29.999 ==> default: -- Video VRAM: 9216
00:01:29.999 ==> default: -- Sound Type:
00:01:29.999 ==> default: -- Keymap: en-us
00:01:29.999 ==> default: -- TPM Path:
00:01:29.999 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:29.999 ==> default: -- Command line args:
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:29.999 ==> default: -> value=-drive,
00:01:29.999 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:29.999 ==> default: -> value=-drive,
00:01:29.999 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.999 ==> default: -> value=-drive,
00:01:29.999 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.999 ==> default: -> value=-drive,
00:01:29.999 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:29.999 ==> default: -> value=-device,
00:01:29.999 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
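The command-line args above give the guest two emulated NVMe controllers: nvme-0 (serial 12340) with a single namespace backed by ex0-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the multi0/1/2 images; these surface inside the VM later as nvme0n1 and nvme1n1..n3. Flattened into a direct qemu invocation, the multi-namespace controller looks roughly like this (a sketch; other VM options omitted, and the third namespace repeats the same -drive/-device pair with nsid=3):

    qemu-system-x86_64 \
        -device nvme,id=nvme-1,serial=12341 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096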
00:01:29.999 ==> default: Creating shared folders metadata...
00:01:29.999 ==> default: Starting domain.
00:01:31.378 ==> default: Waiting for domain to get an IP address...
00:01:49.464 ==> default: Waiting for SSH to become available...
00:01:49.465 ==> default: Configuring and enabling network interfaces...
00:01:52.005 default: SSH address: 192.168.121.122:22
00:01:52.005 default: SSH username: vagrant
00:01:52.005 default: SSH auth method: private key
00:01:54.541 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:01.126 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:06.400 ==> default: Mounting SSHFS shared folder...
00:02:08.305 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.305 ==> default: Checking Mount..
00:02:09.731 ==> default: Folder Successfully Mounted!
00:02:09.731 ==> default: Running provisioner: file...
00:02:10.298 default: ~/.gitconfig => .gitconfig
00:02:10.865
00:02:10.865 SUCCESS!
00:02:10.865
00:02:10.865 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:10.865 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:10.865 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:10.865
00:02:10.874 [Pipeline] }
00:02:10.889 [Pipeline] // stage
00:02:10.898 [Pipeline] dir
00:02:10.899 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:10.901 [Pipeline] {
00:02:10.914 [Pipeline] catchError
00:02:10.916 [Pipeline] {
00:02:10.928 [Pipeline] sh
00:02:11.207 + vagrant ssh-config --host vagrant
00:02:11.207 + sed -ne /^Host/,$p
00:02:11.207 + tee ssh_conf
00:02:14.500 Host vagrant
00:02:14.500 HostName 192.168.121.122
00:02:14.500 User vagrant
00:02:14.500 Port 22
00:02:14.500 UserKnownHostsFile /dev/null
00:02:14.500 StrictHostKeyChecking no
00:02:14.500 PasswordAuthentication no
00:02:14.500 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:14.500 IdentitiesOnly yes
00:02:14.500 LogLevel FATAL
00:02:14.500 ForwardAgent yes
00:02:14.500 ForwardX11 yes
00:02:14.500
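The dir stage captures vagrant's generated SSH parameters once (vagrant ssh-config, trimmed to the Host block with sed and saved via tee) so that every later step can drive the VM with plain ssh/scp -F instead of the slower vagrant ssh wrapper. The same idea stand-alone (the remote commands here are illustrative):

    # Capture vagrant's SSH settings once...
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    # ...then reuse them for arbitrary ssh/scp calls without vagrant in the loop.
    ssh -F ssh_conf vagrant 'uname -a'
    scp -F ssh_conf ./autorun-spdk.conf vagrant:spdk_repo/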
00:02:14.515 [Pipeline] withEnv
00:02:14.518 [Pipeline] {
00:02:14.533 [Pipeline] sh
00:02:14.814 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:14.815 source /etc/os-release
00:02:14.815 [[ -e /image.version ]] && img=$(< /image.version)
00:02:14.815 # Minimal, systemd-like check.
00:02:14.815 if [[ -e /.dockerenv ]]; then
00:02:14.815 # Clear garbage from the node's name:
00:02:14.815 # agt-er_autotest_547-896 -> autotest_547-896
00:02:14.815 # $HOSTNAME is the actual container id
00:02:14.815 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:14.815 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:14.815 # We can assume this is a mount from a host where container is running,
00:02:14.815 # so fetch its hostname to easily identify the target swarm worker.
00:02:14.815 container="$(< /etc/hostname) ($agent)"
00:02:14.815 else
00:02:14.815 # Fallback
00:02:14.815 container=$agent
00:02:14.815 fi
00:02:14.815 fi
00:02:14.815 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:14.815
00:02:15.086 [Pipeline] }
00:02:15.103 [Pipeline] // withEnv
00:02:15.112 [Pipeline] setCustomBuildProperty
00:02:15.127 [Pipeline] stage
00:02:15.130 [Pipeline] { (Tests)
00:02:15.147 [Pipeline] sh
00:02:15.430 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:15.705 [Pipeline] sh
00:02:15.986 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:16.263 [Pipeline] timeout
00:02:16.263 Timeout set to expire in 1 hr 0 min
00:02:16.265 [Pipeline] {
00:02:16.283 [Pipeline] sh
00:02:16.635 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:17.203 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:17.216 [Pipeline] sh
00:02:17.497 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:17.771 [Pipeline] sh
00:02:18.053 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:18.330 [Pipeline] sh
00:02:18.610 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:02:18.870 ++ readlink -f spdk_repo
00:02:18.870 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:18.870 + [[ -n /home/vagrant/spdk_repo ]]
00:02:18.870 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:18.870 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:18.870 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:18.870 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:18.870 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:18.870 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:02:18.870 + cd /home/vagrant/spdk_repo
00:02:18.870 + source /etc/os-release
00:02:18.870 ++ NAME='Fedora Linux'
00:02:18.870 ++ VERSION='39 (Cloud Edition)'
00:02:18.870 ++ ID=fedora
00:02:18.870 ++ VERSION_ID=39
00:02:18.870 ++ VERSION_CODENAME=
00:02:18.870 ++ PLATFORM_ID=platform:f39
00:02:18.870 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:18.870 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:18.870 ++ LOGO=fedora-logo-icon
00:02:18.870 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:18.870 ++ HOME_URL=https://fedoraproject.org/
00:02:18.870 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:18.870 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:18.870 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:18.870 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:18.870 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:18.870 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:18.870 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:18.870 ++ SUPPORT_END=2024-11-12
00:02:18.870 ++ VARIANT='Cloud Edition'
00:02:18.870 ++ VARIANT_ID=cloud
00:02:18.870 + uname -a
00:02:18.870 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:18.870 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:18.870 Hugepages
00:02:18.870 node     hugesize     free /  total
00:02:18.870 node0   1048576kB        0 /      0
00:02:18.870 node0      2048kB        0 /      0
00:02:18.870
00:02:18.870 Type     BDF             Vendor Device NUMA    Driver      Device     Block devices
00:02:18.870 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci  -          vda
00:02:18.870 NVMe     0000:00:06.0    1b36   0010   unknown nvme        nvme0      nvme0n1
00:02:18.870 NVMe     0000:00:07.0    1b36   0010   unknown nvme        nvme1      nvme1n1 nvme1n2 nvme1n3
00:02:18.870 + rm -f /tmp/spdk-ld-path
00:02:18.870 + source autorun-spdk.conf
00:02:18.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.870 ++ SPDK_TEST_NVMF=1
00:02:18.870 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.870 ++ SPDK_TEST_URING=1
00:02:18.870 ++ SPDK_TEST_USDT=1
00:02:18.870 ++ SPDK_RUN_UBSAN=1
00:02:18.870 ++ NET_TYPE=virt
00:02:18.870 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:18.870 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:18.870 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:18.870 ++ RUN_NIGHTLY=1
00:02:18.870 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:18.870 + [[ -n '' ]]
00:02:18.870 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:19.129 + for M in /var/spdk/build-*-manifest.txt
00:02:19.129 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:19.129 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.129 + for M in /var/spdk/build-*-manifest.txt
00:02:19.129 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:19.129 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.129 + for M in /var/spdk/build-*-manifest.txt
00:02:19.129 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:19.129 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:19.129 ++ uname
00:02:19.129 + [[ Linux == \L\i\n\u\x ]]
00:02:19.129 + sudo dmesg -T
00:02:19.129 + sudo dmesg --clear
00:02:19.129 + dmesg_pid=5977
00:02:19.129 + [[ Fedora Linux == FreeBSD ]]
00:02:19.129 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.129 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:19.129 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:19.129 + sudo dmesg -Tw
00:02:19.129 + [[ -x /usr/src/fio-static/fio ]]
00:02:19.129 + export FIO_BIN=/usr/src/fio-static/fio
00:02:19.129 + FIO_BIN=/usr/src/fio-static/fio
00:02:19.129 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:19.129 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:19.129 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:19.129 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.129 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:19.129 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:19.129 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.130 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:19.130 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:19.130 Test configuration:
00:02:19.130 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:19.130 SPDK_TEST_NVMF=1
00:02:19.130 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:19.130 SPDK_TEST_URING=1
00:02:19.130 SPDK_TEST_USDT=1
00:02:19.130 SPDK_RUN_UBSAN=1
00:02:19.130 NET_TYPE=virt
00:02:19.130 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:19.130 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:19.130 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:19.130 RUN_NIGHTLY=1
06:27:14 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
06:27:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
06:27:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
06:27:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
06:27:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
06:27:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:27:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:27:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:27:14 -- paths/export.sh@5 -- $ export PATH
06:27:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:27:14 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
06:27:14 -- common/autobuild_common.sh@440 -- $ date +%s
06:27:14 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733380034.XXXXXX
06:27:14 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733380034.8xgDG9
06:27:14 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
06:27:14 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
06:27:14 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
06:27:14 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
06:27:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
06:27:14 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
06:27:14 -- common/autobuild_common.sh@456 -- $ get_config_params
06:27:14 -- common/autotest_common.sh@397 -- $ xtrace_disable
06:27:14 -- common/autotest_common.sh@10 -- $ set +x
06:27:14 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
06:27:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
06:27:14 -- spdk/autobuild.sh@12 -- $ umask 022
06:27:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
06:27:14 -- spdk/autobuild.sh@16 -- $ date -u
00:02:19.130 Thu Dec 5 06:27:14 AM UTC 2024
00:02:19.130 06:27:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:19.130 LTS-67-gc13c99a5e
00:02:19.130 06:27:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:19.130 06:27:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:19.130 06:27:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:19.130 06:27:14 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:19.130 06:27:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:19.130 06:27:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.390 ************************************
00:02:19.390 START TEST ubsan
00:02:19.390 ************************************
00:02:19.390 using ubsan
00:02:19.390 06:27:14 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:19.390
00:02:19.390 real 0m0.000s
00:02:19.390 user 0m0.000s
00:02:19.390 sys 0m0.000s
00:02:19.390 06:27:14 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:19.390 ************************************
00:02:19.390 END TEST ubsan
00:02:19.390 ************************************
00:02:19.390 06:27:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.390 06:27:14 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:19.390 06:27:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:19.390 06:27:14 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:19.390 06:27:14 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:19.390 06:27:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:19.390 06:27:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.390 ************************************
00:02:19.390 START TEST build_native_dpdk
00:02:19.390 ************************************
00:02:19.390 06:27:14 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk
06:27:14 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
06:27:14 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
06:27:14 -- common/autobuild_common.sh@50 -- $ local compiler_version
06:27:14 -- common/autobuild_common.sh@51 -- $ local compiler
06:27:14 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
06:27:14 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
06:27:14 -- common/autobuild_common.sh@55 -- $ compiler=gcc
06:27:14 -- common/autobuild_common.sh@61 -- $ export CC=gcc
06:27:14 -- common/autobuild_common.sh@61 -- $ CC=gcc
06:27:14 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
06:27:14 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
06:27:14 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
06:27:14 -- common/autobuild_common.sh@68 -- $ compiler_version=13
06:27:14 -- common/autobuild_common.sh@69 -- $ compiler_version=13
06:27:14 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
06:27:14 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
06:27:14 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
06:27:14 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
06:27:14 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
06:27:14 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:19.390 caf0f5d395 version: 22.11.4
00:02:19.390 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:19.390 dc9c799c7d vhost: fix missing spinlock unlock
00:02:19.390 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:19.390 6ef77f2a5e net/gve: fix RX buffer size alignment
06:27:14 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
06:27:14 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
06:27:14 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
06:27:14 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
06:27:14 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
06:27:14 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
06:27:14 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
06:27:14 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
06:27:14 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
06:27:14 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
06:27:14 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
06:27:14 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
06:27:14 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
06:27:14 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
06:27:14 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
06:27:14 -- common/autobuild_common.sh@168 -- $ uname -s
06:27:14 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
06:27:14 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
06:27:14 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0
06:27:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l
06:27:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l
06:27:14 -- scripts/common.sh@335 -- $ IFS=.-:
06:27:14 -- scripts/common.sh@335 -- $ read -ra ver1
06:27:14 -- scripts/common.sh@336 -- $ IFS=.-:
06:27:14 -- scripts/common.sh@336 -- $ read -ra ver2
06:27:14 -- scripts/common.sh@337 -- $ local 'op=<'
06:27:14 -- scripts/common.sh@339 -- $ ver1_l=3
06:27:14 -- scripts/common.sh@340 -- $ ver2_l=3
06:27:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
06:27:14 -- scripts/common.sh@343 -- $ case "$op" in
06:27:14 -- scripts/common.sh@344 -- $ : 1
06:27:14 -- scripts/common.sh@363 -- $ (( v = 0 ))
06:27:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
06:27:14 -- scripts/common.sh@364 -- $ decimal 22
06:27:14 -- scripts/common.sh@352 -- $ local d=22
06:27:14 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
06:27:14 -- scripts/common.sh@354 -- $ echo 22
06:27:14 -- scripts/common.sh@364 -- $ ver1[v]=22
06:27:14 -- scripts/common.sh@365 -- $ decimal 21
06:27:14 -- scripts/common.sh@352 -- $ local d=21
06:27:14 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
06:27:14 -- scripts/common.sh@354 -- $ echo 21
06:27:14 -- scripts/common.sh@365 -- $ ver2[v]=21
06:27:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
06:27:14 -- scripts/common.sh@366 -- $ return 1
06:27:14 -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:19.390 patching file config/rte_config.h
00:02:19.390 Hunk #1 succeeded at 60 (offset 1 line).
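cmp_versions, traced here (and again just below for the 24.07 bound), splits each version string on '.', '-' and ':' into an array and walks the fields numerically, which is why 22.11.4 compares as older than 24.07.0 rather than lexically. A compact sketch of the same field-by-field comparison (the function name is mine; the real helper also supports '>', '=' and more separators):

    # Return success iff dotted version $1 is strictly older than $2.
    version_lt() {
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 22.11.4 24.07.0 && echo "apply the pre-24.07 pcapng patch"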
00:02:19.390 06:27:14 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
06:27:14 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0
06:27:14 -- scripts/common.sh@332 -- $ local ver1 ver1_l
06:27:14 -- scripts/common.sh@333 -- $ local ver2 ver2_l
06:27:14 -- scripts/common.sh@335 -- $ IFS=.-:
06:27:14 -- scripts/common.sh@335 -- $ read -ra ver1
06:27:14 -- scripts/common.sh@336 -- $ IFS=.-:
06:27:14 -- scripts/common.sh@336 -- $ read -ra ver2
06:27:14 -- scripts/common.sh@337 -- $ local 'op=<'
06:27:14 -- scripts/common.sh@339 -- $ ver1_l=3
06:27:14 -- scripts/common.sh@340 -- $ ver2_l=3
06:27:14 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
06:27:14 -- scripts/common.sh@343 -- $ case "$op" in
06:27:14 -- scripts/common.sh@344 -- $ : 1
06:27:14 -- scripts/common.sh@363 -- $ (( v = 0 ))
06:27:14 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
06:27:14 -- scripts/common.sh@364 -- $ decimal 22
06:27:14 -- scripts/common.sh@352 -- $ local d=22
06:27:14 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
06:27:14 -- scripts/common.sh@354 -- $ echo 22
06:27:14 -- scripts/common.sh@364 -- $ ver1[v]=22
06:27:14 -- scripts/common.sh@365 -- $ decimal 24
06:27:14 -- scripts/common.sh@352 -- $ local d=24
06:27:14 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
06:27:14 -- scripts/common.sh@354 -- $ echo 24
06:27:14 -- scripts/common.sh@365 -- $ ver2[v]=24
06:27:14 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
06:27:14 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
06:27:14 -- scripts/common.sh@367 -- $ return 0
06:27:14 -- common/autobuild_common.sh@177 -- $ patch -p1
00:02:19.390 patching file lib/pcapng/rte_pcapng.c
00:02:19.390 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:19.390 06:27:14 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
06:27:14 -- common/autobuild_common.sh@181 -- $ uname -s
06:27:14 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
06:27:14 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
06:27:14 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:24.652 The Meson build system
00:02:24.652 Version: 1.5.0
00:02:24.652 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:24.652 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:24.652 Build type: native build
00:02:24.652 Program cat found: YES (/usr/bin/cat)
00:02:24.652 Project name: DPDK
00:02:24.652 Project version: 22.11.4
00:02:24.652 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:24.652 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:24.652 Host machine cpu family: x86_64
00:02:24.652 Host machine cpu: x86_64
00:02:24.652 Message: ## Building in Developer Mode ##
00:02:24.652 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:24.652 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:24.652 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:24.652 Program objdump found: YES (/usr/bin/objdump)
00:02:24.652 Program python3 found: YES (/usr/bin/python3)
00:02:24.652 Program cat found: YES (/usr/bin/cat)
00:02:24.652 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:24.652 Checking for size of "void *" : 8
00:02:24.652 Checking for size of "void *" : 8 (cached)
00:02:24.652 Library m found: YES
00:02:24.652 Library numa found: YES
00:02:24.652 Has header "numaif.h" : YES
00:02:24.652 Library fdt found: NO
00:02:24.652 Library execinfo found: NO
00:02:24.652 Has header "execinfo.h" : YES
00:02:24.652 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:24.652 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:24.652 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:24.652 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:24.652 Run-time dependency openssl found: YES 3.1.1
00:02:24.652 Run-time dependency libpcap found: YES 1.10.4
00:02:24.652 Has header "pcap.h" with dependency libpcap: YES
00:02:24.652 Compiler for C supports arguments -Wcast-qual: YES
00:02:24.652 Compiler for C supports arguments -Wdeprecated: YES
00:02:24.652 Compiler for C supports arguments -Wformat: YES
00:02:24.652 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:24.652 Compiler for C supports arguments -Wformat-security: NO
00:02:24.652 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:24.652 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:24.652 Compiler for C supports arguments -Wnested-externs: YES
00:02:24.652 Compiler for C supports arguments -Wold-style-definition: YES
00:02:24.652 Compiler for C supports arguments -Wpointer-arith: YES
00:02:24.652 Compiler for C supports arguments -Wsign-compare: YES
00:02:24.652 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:24.652 Compiler for C supports arguments -Wundef: YES
00:02:24.652 Compiler for C supports arguments -Wwrite-strings: YES
00:02:24.652 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:24.652 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:24.652 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:24.652 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:24.652 Compiler for C supports arguments -mavx512f: YES
00:02:24.652 Checking if "AVX512 checking" compiles: YES
00:02:24.652 Fetching value of define "__SSE4_2__" : 1
00:02:24.652 Fetching value of define "__AES__" : 1
00:02:24.652 Fetching value of define "__AVX__" : 1
00:02:24.652 Fetching value of define "__AVX2__" : 1
00:02:24.652 Fetching value of define "__AVX512BW__" : (undefined)
00:02:24.652 Fetching value of define "__AVX512CD__" : (undefined)
00:02:24.652 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:24.652 Fetching value of define "__AVX512F__" : (undefined)
00:02:24.652 Fetching value of define "__AVX512VL__" : (undefined)
00:02:24.652 Fetching value of define "__PCLMUL__" : 1
00:02:24.652 Fetching value of define "__RDRND__" : 1
00:02:24.652 Fetching value of define "__RDSEED__" : 1
00:02:24.652 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:24.652 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:24.652 Message: lib/kvargs: Defining dependency "kvargs"
00:02:24.652 Message: lib/telemetry: Defining dependency "telemetry"
00:02:24.652 Checking for function "getentropy" : YES
00:02:24.652 Message: lib/eal: Defining dependency "eal"
00:02:24.652 Message: lib/ring: Defining dependency "ring"
00:02:24.652 Message: lib/rcu: Defining dependency "rcu"
00:02:24.652 Message: lib/mempool: Defining dependency "mempool"
00:02:24.652 Message: lib/mbuf: Defining dependency "mbuf"
00:02:24.652 Fetching value of define "__PCLMUL__" : 1 (cached)
"__PCLMUL__" : 1 (cached) 00:02:24.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:24.652 Compiler for C supports arguments -mpclmul: YES 00:02:24.652 Compiler for C supports arguments -maes: YES 00:02:24.652 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.652 Compiler for C supports arguments -mavx512bw: YES 00:02:24.652 Compiler for C supports arguments -mavx512dq: YES 00:02:24.652 Compiler for C supports arguments -mavx512vl: YES 00:02:24.652 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:24.652 Compiler for C supports arguments -mavx2: YES 00:02:24.652 Compiler for C supports arguments -mavx: YES 00:02:24.652 Message: lib/net: Defining dependency "net" 00:02:24.652 Message: lib/meter: Defining dependency "meter" 00:02:24.652 Message: lib/ethdev: Defining dependency "ethdev" 00:02:24.652 Message: lib/pci: Defining dependency "pci" 00:02:24.652 Message: lib/cmdline: Defining dependency "cmdline" 00:02:24.653 Message: lib/metrics: Defining dependency "metrics" 00:02:24.653 Message: lib/hash: Defining dependency "hash" 00:02:24.653 Message: lib/timer: Defining dependency "timer" 00:02:24.653 Fetching value of define "__AVX2__" : 1 (cached) 00:02:24.653 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:24.653 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:24.653 Message: lib/acl: Defining dependency "acl" 00:02:24.653 Message: lib/bbdev: Defining dependency "bbdev" 00:02:24.653 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:24.653 Run-time dependency libelf found: YES 0.191 00:02:24.653 Message: lib/bpf: Defining dependency "bpf" 00:02:24.653 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:24.653 Message: lib/compressdev: Defining dependency "compressdev" 00:02:24.653 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:24.653 Message: lib/distributor: Defining dependency "distributor" 00:02:24.653 Message: lib/efd: Defining dependency "efd" 00:02:24.653 Message: lib/eventdev: Defining dependency "eventdev" 00:02:24.653 Message: lib/gpudev: Defining dependency "gpudev" 00:02:24.653 Message: lib/gro: Defining dependency "gro" 00:02:24.653 Message: lib/gso: Defining dependency "gso" 00:02:24.653 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:24.653 Message: lib/jobstats: Defining dependency "jobstats" 00:02:24.653 Message: lib/latencystats: Defining dependency "latencystats" 00:02:24.653 Message: lib/lpm: Defining dependency "lpm" 00:02:24.653 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:24.653 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:24.653 Message: lib/member: Defining dependency "member" 00:02:24.653 Message: lib/pcapng: Defining dependency "pcapng" 00:02:24.653 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:24.653 Message: lib/power: Defining dependency "power" 00:02:24.653 Message: lib/rawdev: Defining dependency "rawdev" 00:02:24.653 Message: lib/regexdev: Defining dependency "regexdev" 00:02:24.653 Message: lib/dmadev: Defining dependency "dmadev" 00:02:24.653 Message: lib/rib: Defining 
dependency "rib" 00:02:24.653 Message: lib/reorder: Defining dependency "reorder" 00:02:24.653 Message: lib/sched: Defining dependency "sched" 00:02:24.653 Message: lib/security: Defining dependency "security" 00:02:24.653 Message: lib/stack: Defining dependency "stack" 00:02:24.653 Has header "linux/userfaultfd.h" : YES 00:02:24.653 Message: lib/vhost: Defining dependency "vhost" 00:02:24.653 Message: lib/ipsec: Defining dependency "ipsec" 00:02:24.653 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:24.653 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:24.653 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:24.653 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:24.653 Message: lib/fib: Defining dependency "fib" 00:02:24.653 Message: lib/port: Defining dependency "port" 00:02:24.653 Message: lib/pdump: Defining dependency "pdump" 00:02:24.653 Message: lib/table: Defining dependency "table" 00:02:24.653 Message: lib/pipeline: Defining dependency "pipeline" 00:02:24.653 Message: lib/graph: Defining dependency "graph" 00:02:24.653 Message: lib/node: Defining dependency "node" 00:02:24.653 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:24.653 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:24.653 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.653 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.653 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:24.653 Compiler for C supports arguments -Wno-unused-value: YES 00:02:24.653 Compiler for C supports arguments -Wno-format: YES 00:02:24.653 Compiler for C supports arguments -Wno-format-security: YES 00:02:24.653 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:26.663 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:26.663 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:26.663 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:26.663 Fetching value of define "__AVX2__" : 1 (cached) 00:02:26.663 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.663 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.663 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:26.663 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:26.663 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:26.663 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:26.663 Configuring doxy-api.conf using configuration 00:02:26.663 Program sphinx-build found: NO 00:02:26.663 Configuring rte_build_config.h using configuration 00:02:26.663 Message: 00:02:26.663 ================= 00:02:26.663 Applications Enabled 00:02:26.663 ================= 00:02:26.663 00:02:26.663 apps: 00:02:26.663 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:26.663 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:26.663 test-security-perf, 00:02:26.663 00:02:26.663 Message: 00:02:26.663 ================= 00:02:26.663 Libraries Enabled 00:02:26.663 ================= 00:02:26.663 00:02:26.663 libs: 00:02:26.663 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:26.663 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:26.663 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:26.663 eventdev, gpudev, gro, gso, ip_frag, 
00:02:26.663 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:26.663 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:26.663 table, pipeline, graph, node,
00:02:26.663
00:02:26.663 Message:
00:02:26.663 ===============
00:02:26.663 Drivers Enabled
00:02:26.663 ===============
00:02:26.663
00:02:26.663 common:
00:02:26.663
00:02:26.663 bus:
00:02:26.663 pci, vdev,
00:02:26.663 mempool:
00:02:26.663 ring,
00:02:26.663 dma:
00:02:26.663
00:02:26.663 net:
00:02:26.663 i40e,
00:02:26.663 raw:
00:02:26.664
00:02:26.664 crypto:
00:02:26.664
00:02:26.664 compress:
00:02:26.664
00:02:26.664 regex:
00:02:26.664
00:02:26.664 vdpa:
00:02:26.664
00:02:26.664 event:
00:02:26.664
00:02:26.664 baseband:
00:02:26.664
00:02:26.664 gpu:
00:02:26.664
00:02:26.664
00:02:26.664 Message:
00:02:26.664 =================
00:02:26.664 Content Skipped
00:02:26.664 =================
00:02:26.664
00:02:26.664 apps:
00:02:26.664
00:02:26.664 libs:
00:02:26.664 kni: explicitly disabled via build config (deprecated lib)
00:02:26.664 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:26.664
00:02:26.664 drivers:
00:02:26.664 common/cpt: not in enabled drivers build config
00:02:26.664 common/dpaax: not in enabled drivers build config
00:02:26.664 common/iavf: not in enabled drivers build config
00:02:26.664 common/idpf: not in enabled drivers build config
00:02:26.664 common/mvep: not in enabled drivers build config
00:02:26.664 common/octeontx: not in enabled drivers build config
00:02:26.664 bus/auxiliary: not in enabled drivers build config
00:02:26.664 bus/dpaa: not in enabled drivers build config
00:02:26.664 bus/fslmc: not in enabled drivers build config
00:02:26.664 bus/ifpga: not in enabled drivers build config
00:02:26.664 bus/vmbus: not in enabled drivers build config
00:02:26.664 common/cnxk: not in enabled drivers build config
00:02:26.664 common/mlx5: not in enabled drivers build config
00:02:26.664 common/qat: not in enabled drivers build config
00:02:26.664 common/sfc_efx: not in enabled drivers build config
00:02:26.664 mempool/bucket: not in enabled drivers build config
00:02:26.664 mempool/cnxk: not in enabled drivers build config
00:02:26.664 mempool/dpaa: not in enabled drivers build config
00:02:26.664 mempool/dpaa2: not in enabled drivers build config
00:02:26.664 mempool/octeontx: not in enabled drivers build config
00:02:26.664 mempool/stack: not in enabled drivers build config
00:02:26.664 dma/cnxk: not in enabled drivers build config
00:02:26.664 dma/dpaa: not in enabled drivers build config
00:02:26.664 dma/dpaa2: not in enabled drivers build config
00:02:26.664 dma/hisilicon: not in enabled drivers build config
00:02:26.664 dma/idxd: not in enabled drivers build config
00:02:26.664 dma/ioat: not in enabled drivers build config
00:02:26.664 dma/skeleton: not in enabled drivers build config
00:02:26.664 net/af_packet: not in enabled drivers build config
00:02:26.664 net/af_xdp: not in enabled drivers build config
00:02:26.664 net/ark: not in enabled drivers build config
00:02:26.664 net/atlantic: not in enabled drivers build config
00:02:26.664 net/avp: not in enabled drivers build config
00:02:26.664 net/axgbe: not in enabled drivers build config
00:02:26.664 net/bnx2x: not in enabled drivers build config
00:02:26.664 net/bnxt: not in enabled drivers build config
00:02:26.664 net/bonding: not in enabled drivers build config
00:02:26.664 net/cnxk: not in enabled drivers build config
00:02:26.664 net/cxgbe: not in enabled drivers build config
enabled drivers build config 00:02:26.664 net/dpaa: not in enabled drivers build config 00:02:26.664 net/dpaa2: not in enabled drivers build config 00:02:26.664 net/e1000: not in enabled drivers build config 00:02:26.664 net/ena: not in enabled drivers build config 00:02:26.664 net/enetc: not in enabled drivers build config 00:02:26.664 net/enetfec: not in enabled drivers build config 00:02:26.664 net/enic: not in enabled drivers build config 00:02:26.664 net/failsafe: not in enabled drivers build config 00:02:26.664 net/fm10k: not in enabled drivers build config 00:02:26.664 net/gve: not in enabled drivers build config 00:02:26.664 net/hinic: not in enabled drivers build config 00:02:26.664 net/hns3: not in enabled drivers build config 00:02:26.664 net/iavf: not in enabled drivers build config 00:02:26.664 net/ice: not in enabled drivers build config 00:02:26.664 net/idpf: not in enabled drivers build config 00:02:26.664 net/igc: not in enabled drivers build config 00:02:26.664 net/ionic: not in enabled drivers build config 00:02:26.664 net/ipn3ke: not in enabled drivers build config 00:02:26.664 net/ixgbe: not in enabled drivers build config 00:02:26.664 net/kni: not in enabled drivers build config 00:02:26.664 net/liquidio: not in enabled drivers build config 00:02:26.664 net/mana: not in enabled drivers build config 00:02:26.664 net/memif: not in enabled drivers build config 00:02:26.664 net/mlx4: not in enabled drivers build config 00:02:26.664 net/mlx5: not in enabled drivers build config 00:02:26.664 net/mvneta: not in enabled drivers build config 00:02:26.664 net/mvpp2: not in enabled drivers build config 00:02:26.664 net/netvsc: not in enabled drivers build config 00:02:26.664 net/nfb: not in enabled drivers build config 00:02:26.664 net/nfp: not in enabled drivers build config 00:02:26.664 net/ngbe: not in enabled drivers build config 00:02:26.664 net/null: not in enabled drivers build config 00:02:26.664 net/octeontx: not in enabled drivers build config 00:02:26.664 net/octeon_ep: not in enabled drivers build config 00:02:26.664 net/pcap: not in enabled drivers build config 00:02:26.664 net/pfe: not in enabled drivers build config 00:02:26.664 net/qede: not in enabled drivers build config 00:02:26.664 net/ring: not in enabled drivers build config 00:02:26.664 net/sfc: not in enabled drivers build config 00:02:26.664 net/softnic: not in enabled drivers build config 00:02:26.664 net/tap: not in enabled drivers build config 00:02:26.664 net/thunderx: not in enabled drivers build config 00:02:26.664 net/txgbe: not in enabled drivers build config 00:02:26.664 net/vdev_netvsc: not in enabled drivers build config 00:02:26.664 net/vhost: not in enabled drivers build config 00:02:26.664 net/virtio: not in enabled drivers build config 00:02:26.664 net/vmxnet3: not in enabled drivers build config 00:02:26.664 raw/cnxk_bphy: not in enabled drivers build config 00:02:26.664 raw/cnxk_gpio: not in enabled drivers build config 00:02:26.664 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:26.664 raw/ifpga: not in enabled drivers build config 00:02:26.664 raw/ntb: not in enabled drivers build config 00:02:26.664 raw/skeleton: not in enabled drivers build config 00:02:26.664 crypto/armv8: not in enabled drivers build config 00:02:26.664 crypto/bcmfs: not in enabled drivers build config 00:02:26.664 crypto/caam_jr: not in enabled drivers build config 00:02:26.664 crypto/ccp: not in enabled drivers build config 00:02:26.664 crypto/cnxk: not in enabled drivers build config 00:02:26.664 
crypto/dpaa_sec: not in enabled drivers build config 00:02:26.664 crypto/dpaa2_sec: not in enabled drivers build config 00:02:26.664 crypto/ipsec_mb: not in enabled drivers build config 00:02:26.664 crypto/mlx5: not in enabled drivers build config 00:02:26.664 crypto/mvsam: not in enabled drivers build config 00:02:26.664 crypto/nitrox: not in enabled drivers build config 00:02:26.664 crypto/null: not in enabled drivers build config 00:02:26.664 crypto/octeontx: not in enabled drivers build config 00:02:26.664 crypto/openssl: not in enabled drivers build config 00:02:26.664 crypto/scheduler: not in enabled drivers build config 00:02:26.664 crypto/uadk: not in enabled drivers build config 00:02:26.664 crypto/virtio: not in enabled drivers build config 00:02:26.664 compress/isal: not in enabled drivers build config 00:02:26.664 compress/mlx5: not in enabled drivers build config 00:02:26.664 compress/octeontx: not in enabled drivers build config 00:02:26.664 compress/zlib: not in enabled drivers build config 00:02:26.664 regex/mlx5: not in enabled drivers build config 00:02:26.664 regex/cn9k: not in enabled drivers build config 00:02:26.664 vdpa/ifc: not in enabled drivers build config 00:02:26.664 vdpa/mlx5: not in enabled drivers build config 00:02:26.664 vdpa/sfc: not in enabled drivers build config 00:02:26.664 event/cnxk: not in enabled drivers build config 00:02:26.664 event/dlb2: not in enabled drivers build config 00:02:26.664 event/dpaa: not in enabled drivers build config 00:02:26.664 event/dpaa2: not in enabled drivers build config 00:02:26.664 event/dsw: not in enabled drivers build config 00:02:26.664 event/opdl: not in enabled drivers build config 00:02:26.664 event/skeleton: not in enabled drivers build config 00:02:26.664 event/sw: not in enabled drivers build config 00:02:26.664 event/octeontx: not in enabled drivers build config 00:02:26.664 baseband/acc: not in enabled drivers build config 00:02:26.664 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:26.664 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:26.664 baseband/la12xx: not in enabled drivers build config 00:02:26.664 baseband/null: not in enabled drivers build config 00:02:26.664 baseband/turbo_sw: not in enabled drivers build config 00:02:26.664 gpu/cuda: not in enabled drivers build config 00:02:26.664 00:02:26.664 00:02:26.664 Build targets in project: 314 00:02:26.664 00:02:26.664 DPDK 22.11.4 00:02:26.664 00:02:26.664 User defined options 00:02:26.664 libdir : lib 00:02:26.664 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:26.664 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:26.664 c_link_args : 00:02:26.664 enable_docs : false 00:02:26.664 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:26.664 enable_kmods : false 00:02:26.664 machine : native 00:02:26.664 tests : false 00:02:26.664 00:02:26.664 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.664 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
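
The "User defined options" summary above records how this DPDK 22.11.4 tree was configured, and the WARNING shows the wrapper still invokes plain `meson [options]`, a spelling meson has deprecated in favor of `meson setup [options]`. A minimal sketch of an equivalent, non-deprecated invocation, reconstructed only from the options printed above (run from the DPDK source checkout; `build-tmp` and the prefix path are the ones used by the `ninja` step that follows):

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

Note the trailing comma on the logged `enable_drivers` value: that summary line wraps in the log, so the driver list shown (and reused here) may be truncated; the authoritative outcome is the "Drivers Enabled" section above (bus pci/vdev, mempool ring, net i40e).
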
00:02:26.664 06:27:21 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:26.664 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:26.664 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:26.664 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:26.664 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:26.664 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:26.664 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.664 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.664 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.664 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.664 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.664 [10/743] Linking static target lib/librte_kvargs.a 00:02:26.665 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.665 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.665 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.665 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.665 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.924 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.924 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.924 [18/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.924 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.924 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.924 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:26.924 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.924 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.924 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:26.924 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.924 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.183 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.183 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:27.183 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.183 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.183 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.183 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:27.183 [33/743] Linking static target lib/librte_telemetry.a 00:02:27.183 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.183 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.183 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.183 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.441 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:27.441 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.441 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.441 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.441 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.441 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.700 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.700 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.700 [46/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.700 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.700 [48/743] Linking target lib/librte_telemetry.so.23.0 00:02:27.700 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:27.700 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.700 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.700 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.700 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.700 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.700 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.700 [56/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:27.700 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.958 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.958 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.958 [60/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.958 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.958 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.958 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.958 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.958 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:27.958 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.958 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.958 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.958 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.958 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.217 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.217 [72/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.217 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.217 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.217 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.217 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.217 [77/743] Generating lib/rte_eal_mingw with a custom command 00:02:28.217 [78/743] Generating lib/rte_eal_def with a custom 
command 00:02:28.217 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.217 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.217 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.217 [82/743] Generating lib/rte_ring_def with a custom command 00:02:28.217 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:28.217 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:28.217 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:28.217 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.475 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.475 [88/743] Linking static target lib/librte_ring.a 00:02:28.475 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.475 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:28.475 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:28.475 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.475 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.733 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.733 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.733 [96/743] Linking static target lib/librte_eal.a 00:02:28.991 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.991 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:28.991 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.991 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:28.991 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.250 [102/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.250 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.250 [104/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.250 [105/743] Linking static target lib/librte_rcu.a 00:02:29.250 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.250 [107/743] Linking static target lib/librte_mempool.a 00:02:29.508 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.508 [109/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.508 [110/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.508 [111/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.508 [112/743] Generating lib/rte_net_def with a custom command 00:02:29.508 [113/743] Generating lib/rte_net_mingw with a custom command 00:02:29.508 [114/743] Generating lib/rte_meter_def with a custom command 00:02:29.508 [115/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.767 [116/743] Generating lib/rte_meter_mingw with a custom command 00:02:29.767 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.767 [118/743] Linking static target lib/librte_meter.a 00:02:29.767 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.767 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.767 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.024 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:30.024 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.024 [124/743] Linking static target lib/librte_mbuf.a 00:02:30.024 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.024 [126/743] Linking static target lib/librte_net.a 00:02:30.024 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.281 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.281 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.538 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.538 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.538 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.538 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.538 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.796 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.054 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.054 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:31.054 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.312 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:31.312 [140/743] Generating lib/rte_pci_def with a custom command 00:02:31.312 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:31.312 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.312 [143/743] Linking static target lib/librte_pci.a 00:02:31.312 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.312 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.312 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.312 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.312 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.570 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.570 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.570 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.570 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.570 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.570 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.570 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.570 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.570 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:31.570 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:31.570 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.570 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.829 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:31.829 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:31.829 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.829 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:31.829 [165/743] Generating lib/rte_hash_def with a custom command 00:02:31.829 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:31.829 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:31.829 [168/743] Generating lib/rte_timer_def with a custom command 00:02:31.829 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:31.829 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.087 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.087 [172/743] Linking static target lib/librte_cmdline.a 00:02:32.087 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.346 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.346 [175/743] Linking static target lib/librte_metrics.a 00:02:32.346 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.346 [177/743] Linking static target lib/librte_timer.a 00:02:32.605 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.605 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.864 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.864 [181/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.864 [182/743] Linking static target lib/librte_ethdev.a 00:02:32.864 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.864 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.431 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:33.431 [186/743] Generating lib/rte_acl_def with a custom command 00:02:33.431 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:33.431 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:33.431 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:33.431 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:33.690 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:33.690 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:33.690 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:33.948 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:34.205 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:34.205 [196/743] Linking static target lib/librte_bitratestats.a 00:02:34.205 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:34.462 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.462 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:34.462 [200/743] Linking static target lib/librte_bbdev.a 00:02:34.719 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:34.976 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.976 [203/743] Linking static target lib/librte_hash.a 00:02:34.976 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:34.976 [205/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:34.976 [206/743] Generating lib/bbdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:34.976 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:35.234 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:35.234 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:35.492 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.492 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:35.492 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:35.750 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:35.750 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:35.750 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:35.750 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:35.750 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:35.750 [218/743] Linking static target lib/librte_acl.a 00:02:35.750 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:36.009 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:36.009 [221/743] Linking static target lib/librte_cfgfile.a 00:02:36.009 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:36.009 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:36.009 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:36.009 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.268 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.268 [227/743] Linking target lib/librte_eal.so.23.0 00:02:36.268 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.268 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:36.268 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:36.268 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:36.268 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:36.268 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:36.527 [234/743] Linking target lib/librte_ring.so.23.0 00:02:36.527 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:36.527 [236/743] Linking target lib/librte_meter.so.23.0 00:02:36.527 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:36.527 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:36.527 [239/743] Linking target lib/librte_pci.so.23.0 00:02:36.527 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:36.527 [241/743] Linking target lib/librte_mempool.so.23.0 00:02:36.527 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.527 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:36.786 [244/743] Linking target lib/librte_timer.so.23.0 00:02:36.786 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:36.786 [246/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:36.786 [247/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:36.786 [248/743] Generating symbol file 
lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:36.786 [249/743] Linking static target lib/librte_bpf.a 00:02:36.786 [250/743] Linking target lib/librte_acl.so.23.0 00:02:36.786 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:36.786 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:02:36.786 [253/743] Linking static target lib/librte_compressdev.a 00:02:36.786 [254/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:36.786 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:36.786 [256/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:37.045 [257/743] Generating lib/rte_distributor_def with a custom command 00:02:37.045 [258/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.045 [259/743] Linking target lib/librte_net.so.23.0 00:02:37.045 [260/743] Linking target lib/librte_bbdev.so.23.0 00:02:37.045 [261/743] Generating lib/rte_distributor_mingw with a custom command 00:02:37.045 [262/743] Generating lib/rte_efd_def with a custom command 00:02:37.045 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:37.045 [264/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.045 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:37.045 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:37.045 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:37.045 [268/743] Linking target lib/librte_hash.so.23.0 00:02:37.303 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:37.304 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:37.304 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:37.304 [272/743] Linking static target lib/librte_distributor.a 00:02:37.562 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.562 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.562 [275/743] Linking target lib/librte_compressdev.so.23.0 00:02:37.562 [276/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.821 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:37.821 [278/743] Linking target lib/librte_distributor.so.23.0 00:02:37.821 [279/743] Linking target lib/librte_ethdev.so.23.0 00:02:37.821 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:37.821 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:37.821 [282/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:37.821 [283/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:37.821 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:37.821 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:38.078 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:38.078 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:38.078 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:38.078 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:38.078 [290/743] Generating 
lib/rte_gpudev_mingw with a custom command 00:02:38.336 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:38.594 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:38.594 [293/743] Linking static target lib/librte_efd.a 00:02:38.594 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:38.852 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.852 [296/743] Linking static target lib/librte_cryptodev.a 00:02:38.852 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.852 [298/743] Linking target lib/librte_efd.so.23.0 00:02:38.852 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:39.109 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.109 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:39.109 [302/743] Linking static target lib/librte_gpudev.a 00:02:39.110 [303/743] Generating lib/rte_gro_def with a custom command 00:02:39.110 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:39.110 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:39.110 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.368 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:39.368 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:39.626 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:39.626 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:39.626 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:39.626 [312/743] Generating lib/rte_gso_def with a custom command 00:02:39.626 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:39.882 [314/743] Generating lib/rte_gso_mingw with a custom command 00:02:39.882 [315/743] Linking static target lib/librte_gro.a 00:02:39.882 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.882 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:39.882 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:39.882 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.139 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:40.139 [321/743] Linking target lib/librte_gro.so.23.0 00:02:40.139 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:40.139 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:40.139 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:40.139 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:40.139 [326/743] Linking static target lib/librte_eventdev.a 00:02:40.397 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:40.397 [328/743] Linking static target lib/librte_jobstats.a 00:02:40.397 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:40.397 [330/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:40.397 [331/743] Linking static target lib/librte_gso.a 00:02:40.397 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:40.654 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:40.654 
[334/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.654 [335/743] Linking target lib/librte_gso.so.23.0 00:02:40.654 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:40.654 [337/743] Generating lib/rte_latencystats_def with a custom command 00:02:40.654 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:40.654 [339/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:40.654 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:40.654 [341/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.654 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:40.655 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:40.655 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:40.912 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:40.912 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:40.912 [347/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.912 [348/743] Linking static target lib/librte_ip_frag.a 00:02:40.912 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:41.169 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:41.169 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.169 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:41.425 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:41.425 [354/743] Linking static target lib/librte_latencystats.a 00:02:41.425 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:41.425 [356/743] Generating lib/rte_member_def with a custom command 00:02:41.425 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:41.425 [358/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:41.425 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:41.425 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:41.425 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:41.425 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:41.425 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:41.425 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.683 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.683 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:41.683 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.683 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.683 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:41.683 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:41.941 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:41.941 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:41.941 [373/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:41.941 [374/743] Linking static target lib/librte_lpm.a 
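
At this stage the build alternates between `Linking static target` steps, which produce `.a` archives (e.g. `lib/librte_lpm.a` at step [374/743]), `Linking target` steps for the versioned shared objects (`librte_lpm.so.23.0` follows shortly), and the `sym_chk` steps, DPDK's check of each built library's exported symbols against its version.map. A hypothetical way to inspect both artifacts from the build directory used in this log (the `lib/` output layout is DPDK's default and an assumption here):

    cd /home/vagrant/spdk_repo/dpdk/build-tmp
    ar t lib/librte_lpm.a                        # object files bundled into the static archive
    nm -D --defined-only lib/librte_lpm.so.23.0  # exported symbols; compare with lib/lpm/version.map
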
00:02:42.199 [375/743] Generating lib/rte_power_def with a custom command 00:02:42.199 [376/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.199 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:42.199 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:42.199 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.199 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:42.199 [381/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:42.199 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:42.199 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:42.486 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:42.486 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.486 [386/743] Generating lib/rte_dmadev_def with a custom command 00:02:42.487 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:42.487 [388/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.487 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:42.487 [390/743] Linking static target lib/librte_pcapng.a 00:02:42.487 [391/743] Linking target lib/librte_lpm.so.23.0 00:02:42.487 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:42.487 [393/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:42.487 [394/743] Linking static target lib/librte_rawdev.a 00:02:42.487 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.487 [396/743] Generating lib/rte_rib_def with a custom command 00:02:42.487 [397/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:42.487 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:42.759 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:42.759 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:42.759 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.759 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:42.759 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.759 [404/743] Linking static target lib/librte_dmadev.a 00:02:42.759 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.759 [406/743] Linking static target lib/librte_power.a 00:02:43.016 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:43.016 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.016 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:43.016 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:43.016 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:43.016 [412/743] Linking static target lib/librte_regexdev.a 00:02:43.016 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:43.016 [414/743] Generating lib/rte_sched_def with a custom command 00:02:43.016 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:43.274 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:43.275 [417/743] Generating lib/rte_security_def with a custom command 00:02:43.275 
[418/743] Generating lib/rte_security_mingw with a custom command 00:02:43.275 [419/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:43.275 [420/743] Linking static target lib/librte_member.a 00:02:43.275 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:43.275 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.275 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.275 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:43.275 [425/743] Linking target lib/librte_dmadev.so.23.0 00:02:43.533 [426/743] Generating lib/rte_stack_def with a custom command 00:02:43.533 [427/743] Generating lib/rte_stack_mingw with a custom command 00:02:43.533 [428/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.533 [429/743] Linking static target lib/librte_reorder.a 00:02:43.533 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:43.533 [431/743] Linking static target lib/librte_stack.a 00:02:43.533 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:43.533 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.533 [434/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.533 [435/743] Linking target lib/librte_member.so.23.0 00:02:43.792 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.792 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.792 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:43.792 [439/743] Linking static target lib/librte_rib.a 00:02:43.792 [440/743] Linking target lib/librte_stack.so.23.0 00:02:43.792 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:43.792 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.792 [443/743] Linking target lib/librte_power.so.23.0 00:02:43.792 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.792 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:44.050 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.050 [447/743] Linking static target lib/librte_security.a 00:02:44.050 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.050 [449/743] Linking target lib/librte_rib.so.23.0 00:02:44.308 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:44.308 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:44.308 [452/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.308 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:44.308 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.567 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.567 [456/743] Linking target lib/librte_security.so.23.0 00:02:44.567 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.567 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:44.825 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:44.826 [460/743] Linking static target 
lib/librte_sched.a 00:02:45.084 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.342 [462/743] Linking target lib/librte_sched.so.23.0 00:02:45.342 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:45.342 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:45.342 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.342 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:45.342 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:45.342 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:45.342 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.599 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:45.599 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:45.856 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:45.856 [473/743] Generating lib/rte_fib_def with a custom command 00:02:45.856 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:45.856 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:45.856 [476/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:45.856 [477/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:45.856 [478/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:46.112 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:46.112 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:46.369 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:46.369 [482/743] Linking static target lib/librte_ipsec.a 00:02:46.627 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.627 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:46.627 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:46.885 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:46.885 [487/743] Linking static target lib/librte_fib.a 00:02:46.885 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:46.885 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:46.885 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:47.141 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:47.141 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.141 [493/743] Linking target lib/librte_fib.so.23.0 00:02:47.398 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:47.964 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:47.964 [496/743] Generating lib/rte_port_def with a custom command 00:02:47.964 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:47.964 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:47.964 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:47.964 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:47.964 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:47.964 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:47.964 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:48.222 [504/743] 
Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:48.222 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:48.222 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:48.222 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:48.480 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:48.480 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:48.480 [510/743] Linking static target lib/librte_port.a 00:02:48.738 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:48.738 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:48.997 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.997 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:48.997 [515/743] Linking target lib/librte_port.so.23.0 00:02:48.997 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:49.255 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:49.255 [518/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:49.255 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:49.255 [520/743] Linking static target lib/librte_pdump.a 00:02:49.513 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.513 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:49.771 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:49.771 [524/743] Generating lib/rte_table_def with a custom command 00:02:49.771 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:49.771 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:49.771 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:50.028 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:50.028 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:50.028 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:50.286 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:50.286 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:50.286 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:50.286 [534/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.544 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:50.544 [536/743] Linking static target lib/librte_table.a 00:02:50.544 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:50.802 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:51.060 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:51.060 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.060 [541/743] Linking target lib/librte_table.so.23.0 00:02:51.060 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:51.060 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:51.318 [544/743] Generating lib/rte_graph_def with a custom command 00:02:51.318 [545/743] Generating lib/rte_graph_mingw 
with a custom command 00:02:51.318 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:51.318 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:51.576 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:51.834 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:51.834 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:51.834 [551/743] Linking static target lib/librte_graph.a 00:02:51.834 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:52.093 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:52.093 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:52.093 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:52.661 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:52.661 [557/743] Generating lib/rte_node_def with a custom command 00:02:52.661 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:52.661 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:52.661 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.661 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.661 [562/743] Linking target lib/librte_graph.so.23.0 00:02:52.661 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:52.920 [564/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:52.920 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:52.920 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.920 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:52.920 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:52.920 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.920 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.920 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:53.179 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.179 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:53.179 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:53.179 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:53.179 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:53.179 [577/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.179 [578/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.179 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.179 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:53.179 [581/743] Linking static target lib/librte_node.a 00:02:53.450 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.450 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.450 [584/743] Linking static target drivers/librte_bus_vdev.a 00:02:53.450 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.450 [586/743] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:53.450 [587/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.450 [588/743] Linking target lib/librte_node.so.23.0 00:02:53.450 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.734 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.734 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.734 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.734 [593/743] Linking static target drivers/librte_bus_pci.a 00:02:53.734 [594/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:53.734 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.992 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:54.251 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.251 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:54.251 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:54.251 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:54.251 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:54.251 [602/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:54.509 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.509 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.509 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.509 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.509 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:54.509 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.768 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:54.768 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:55.026 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:55.593 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:55.593 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:55.593 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:56.158 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.158 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.158 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.726 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:56.727 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:56.727 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:56.987 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:56.987 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:56.987 [623/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:56.987 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:57.245 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:58.178 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:58.436 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.436 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.436 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.436 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.695 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.695 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.695 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:58.695 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:58.954 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:58.954 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:59.519 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:59.519 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.519 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.777 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:59.777 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:59.777 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.777 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.777 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.777 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:00.034 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:00.034 [647/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.034 [648/743] Linking static target lib/librte_vhost.a 00:03:00.034 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:00.291 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:00.550 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:00.550 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:00.808 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.808 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:00.808 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:00.808 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:01.066 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:01.324 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.583 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:01.583 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.583 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.583 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.583 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.583 [664/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.841 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.841 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:01.841 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.100 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.100 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.358 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.617 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.617 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:02.617 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:03.184 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.184 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.442 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.442 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.700 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.700 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.959 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.959 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.959 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.218 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.218 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.476 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:04.476 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:04.476 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:04.735 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:04.735 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:04.994 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:04.994 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.994 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:04.994 [693/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:04.994 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:05.560 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:05.560 
[696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:05.560 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:05.818 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:06.076 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:06.335 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:06.593 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:06.593 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:06.851 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.851 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:06.851 [705/743] Linking static target lib/librte_pipeline.a 00:03:06.851 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:06.851 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.113 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:07.425 [709/743] Linking target app/dpdk-dumpcap 00:03:07.425 [710/743] Linking target app/dpdk-pdump 00:03:07.425 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.683 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:07.683 [713/743] Linking target app/dpdk-proc-info 00:03:07.683 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:07.683 [715/743] Linking target app/dpdk-test-acl 00:03:07.941 [716/743] Linking target app/dpdk-test-bbdev 00:03:07.941 [717/743] Linking target app/dpdk-test-cmdline 00:03:08.199 [718/743] Linking target app/dpdk-test-compress-perf 00:03:08.199 [719/743] Linking target app/dpdk-test-crypto-perf 00:03:08.199 [720/743] Linking target app/dpdk-test-eventdev 00:03:08.199 [721/743] Linking target app/dpdk-test-fib 00:03:08.199 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.199 [723/743] Linking target app/dpdk-test-flow-perf 00:03:08.199 [724/743] Linking target app/dpdk-test-gpudev 00:03:08.456 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.456 [726/743] Linking target app/dpdk-test-pipeline 00:03:09.023 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.023 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:09.023 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.281 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.281 [731/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.281 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.281 [733/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.540 [734/743] Linking target lib/librte_pipeline.so.23.0 00:03:09.540 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:09.798 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.798 [737/743] Linking target app/dpdk-test-sad 00:03:09.798 [738/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:10.055 [739/743] Linking target app/dpdk-test-regex 00:03:10.312 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.312 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:10.875 [742/743] 
Linking target app/dpdk-test-security-perf 00:03:10.876 [743/743] Linking target app/dpdk-testpmd 00:03:10.876 06:28:06 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.876 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.876 [0/1] Installing files. 00:03:11.443 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.443 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.444 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.444 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.445 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.446 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.446 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.446 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.447 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.448 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.448 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.448 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.709 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.709 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.709 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.709 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.709 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.709 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.709 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.709 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
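The rte_eal.h, rte_launch.h, rte_lcore.h and rte_debug.h copies above stage DPDK's public EAL API into the build tree's include directory. As a minimal sketch (not part of this build output) of what a consumer of that staged tree looks like, assuming the stock 22.11 launch API; the file name hello.c and the helper lcore_hello are illustrative only:

    #include <stdio.h>
    #include <rte_debug.h>
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* Runs once on every lcore that EAL launches it on. */
    static int
    lcore_hello(void *arg)
    {
            (void)arg;
            printf("hello from lcore %u\n", rte_lcore_id());
            return 0;
    }

    int
    main(int argc, char **argv)
    {
            /* Consumes the EAL arguments (-l <cores>, --no-huge, ...). */
            if (rte_eal_init(argc, argv) < 0)
                    rte_panic("cannot init EAL\n");

            /* Run lcore_hello on every lcore, main lcore included. */
            rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_MAIN);
            rte_eal_mp_wait_lcore();

            rte_eal_cleanup();
            return 0;
    }

With the libdpdk.pc staged later in this install, such a file would typically be built against this tree with something like: PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig cc hello.c $(pkg-config --cflags --libs libdpdk).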
00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.710 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
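Alongside the ring headers, the rte_mempool.h and rte_mbuf.h copies above stage DPDK's packet-buffer pool API. A hedged sketch of its basic create/alloc/append/free cycle against this staged tree follows; the pool name "demo_pool" and the sizes are illustrative, not taken from this build:

    #include <string.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return 1;

            /* 8191 packet buffers, a 256-deep per-lcore cache, and the
             * default buffer size (2 KiB data room plus headroom). */
            struct rte_mempool *pool = rte_pktmbuf_pool_create(
                    "demo_pool", 8191, 256, 0,
                    RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
            if (pool == NULL)
                    return 1;

            /* Take one mbuf, reserve 64 bytes of payload, release it. */
            struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
            if (m != NULL) {
                    char *p = rte_pktmbuf_append(m, 64);
                    if (p != NULL)
                            memset(p, 0, 64);
                    rte_pktmbuf_free(m);
            }

            rte_eal_cleanup();
            return 0;
    }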
00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.711 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.712 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.971 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.971 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:11.971 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.971 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:11.971 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.971 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:11.971 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.971 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:11.972 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.972 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:11.972 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.972 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:11.972 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.972 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:11.972 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.972 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:11.972 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.972 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:11.972 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.972 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:11.972 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.972 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:11.972 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.972 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:11.972 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.972 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:11.972 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.972 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:11.972 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.972 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:11.972 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.972 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:11.972 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.972 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:11.972 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.972 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:11.972 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.972 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:11.972 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.972 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:11.972 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.972 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:11.972 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.972 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:11.972 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.972 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:11.972 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.972 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:11.972 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.972 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:11.972 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.972 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:11.972 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:11.972 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:11.972 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:11.972 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:11.972 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:11.972 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:11.972 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:11.972 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:11.972 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:11.972 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:11.972 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:11.972 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:11.972 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.972 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:11.972 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.972 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:11.972 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.972 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:11.972 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.972 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:11.972 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.972 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:11.972 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.972 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:11.972 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.972 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:11.972 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.972 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:11.972 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.972 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:11.972 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.972 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:11.972 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.972 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:11.972 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.972 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:11.972 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.972 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:11.972 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.972 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:11.972 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.972 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:11.972 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.972 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:11.972 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.972 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:11.972 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.972 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:11.972 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.972 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:11.972 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.972 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:11.972 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.972 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:11.972 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.972 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:11.972 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.972 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:11.972 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.972 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:11.972 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.972 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:11.972 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.973 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:11.973 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.973 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:11.973 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:11.973 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:11.973 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:11.973 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:11.973 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:11.973 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:11.973 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:11.973 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:11.973 06:28:07 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:11.973 06:28:07 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.973 06:28:07 -- common/autobuild_common.sh@203 -- $ cat 00:03:11.973 06:28:07 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.973 00:03:11.973 real 0m52.611s 00:03:11.973 user 6m15.183s 00:03:11.973 sys 0m55.764s 00:03:11.973 06:28:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:11.973 06:28:07 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.973 ************************************ 00:03:11.973 END TEST build_native_dpdk 00:03:11.973 ************************************ 00:03:11.973 06:28:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.973 06:28:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.973 06:28:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.973 06:28:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.973 06:28:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.973 06:28:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.973 06:28:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.973 
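At this point the DPDK install is complete: headers under build/include, pkg-config metadata under build/lib/pkgconfig, each library's .so.23.0 / .so.23 / .so symlink chain, and the PMD plugins relocated into dpdk/pmds-23.0 by symlink-drivers-solibs.sh. A minimal sketch of sanity-checking that result from a shell, assuming the same vagrant paths as the log (the module name libdpdk comes from the two .pc files installed above; the version reported is whatever 22.11.x release sits behind the .so.23 ABI):

    # point pkg-config at the freshly installed metadata
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk    # release behind the .so.23 ABI seen above
    pkg-config --libs libdpdk          # link flags covering the libraries just symlinked
    # the PMDs moved by symlink-drivers-solibs.sh
    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0

The configure step that follows picks this metadata up directly, as its "Using ... pkgconfig for additional libs" line confirms.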
06:28:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:11.973 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:12.232 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.232 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:12.232 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.491 Using 'verbs' RDMA provider 00:03:25.630 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:40.508 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:40.508 Creating mk/config.mk...done. 00:03:40.508 Creating mk/cc.flags.mk...done. 00:03:40.508 Type 'make' to build. 00:03:40.508 06:28:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.508 06:28:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:40.508 06:28:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:40.508 06:28:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.508 ************************************ 00:03:40.508 START TEST make 00:03:40.508 ************************************ 00:03:40.508 06:28:34 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:40.508 make[1]: Nothing to be done for 'all'. 00:04:02.422 CC lib/ut/ut.o 00:04:02.423 CC lib/ut_mock/mock.o 00:04:02.423 CC lib/log/log.o 00:04:02.423 CC lib/log/log_flags.o 00:04:02.423 CC lib/log/log_deprecated.o 00:04:02.423 LIB libspdk_ut_mock.a 00:04:02.423 SO libspdk_ut_mock.so.5.0 00:04:02.423 LIB libspdk_log.a 00:04:02.423 LIB libspdk_ut.a 00:04:02.423 SO libspdk_ut.so.1.0 00:04:02.423 SO libspdk_log.so.6.1 00:04:02.423 SYMLINK libspdk_ut_mock.so 00:04:02.423 SYMLINK libspdk_ut.so 00:04:02.423 SYMLINK libspdk_log.so 00:04:02.423 CC lib/ioat/ioat.o 00:04:02.423 CXX lib/trace_parser/trace.o 00:04:02.423 CC lib/dma/dma.o 00:04:02.423 CC lib/util/base64.o 00:04:02.423 CC lib/util/bit_array.o 00:04:02.423 CC lib/util/crc16.o 00:04:02.423 CC lib/util/cpuset.o 00:04:02.423 CC lib/util/crc32.o 00:04:02.423 CC lib/util/crc32c.o 00:04:02.423 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.423 CC lib/util/crc32_ieee.o 00:04:02.423 CC lib/util/crc64.o 00:04:02.423 CC lib/util/dif.o 00:04:02.423 CC lib/vfio_user/host/vfio_user.o 00:04:02.423 LIB libspdk_dma.a 00:04:02.682 SO libspdk_dma.so.3.0 00:04:02.682 CC lib/util/fd.o 00:04:02.682 SYMLINK libspdk_dma.so 00:04:02.682 CC lib/util/file.o 00:04:02.682 CC lib/util/hexlify.o 00:04:02.682 LIB libspdk_ioat.a 00:04:02.682 CC lib/util/iov.o 00:04:02.682 CC lib/util/math.o 00:04:02.682 SO libspdk_ioat.so.6.0 00:04:02.682 CC lib/util/pipe.o 00:04:02.682 CC lib/util/strerror_tls.o 00:04:02.682 SYMLINK libspdk_ioat.so 00:04:02.682 CC lib/util/string.o 00:04:02.682 CC lib/util/uuid.o 00:04:02.682 LIB libspdk_vfio_user.a 00:04:02.682 SO libspdk_vfio_user.so.4.0 00:04:02.940 CC lib/util/fd_group.o 00:04:02.940 CC lib/util/xor.o 00:04:02.940 SYMLINK libspdk_vfio_user.so 00:04:02.940 CC lib/util/zipf.o 00:04:02.940 LIB libspdk_util.a 00:04:03.199 SO libspdk_util.so.8.0 00:04:03.199 SYMLINK libspdk_util.so 00:04:03.458 LIB libspdk_trace_parser.a 00:04:03.458 SO libspdk_trace_parser.so.4.0 00:04:03.458 CC lib/conf/conf.o 00:04:03.458 CC lib/json/json_parse.o 
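The long compile stream that follows (continuing below) is produced by this configure + make pair; run_test appears to be only the harness wrapper that emits the START/END TEST banners and timing. A hedged sketch of reproducing the same build outside the CI wrapper, with the flags copied verbatim from the configure line above and the job count from the traced make -j10:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
    make -j10

Note that --with-dpdk points at the DPDK build installed in the preceding step, which is why the configure output lists its pkgconfig directory for additional libs.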
00:04:03.458 CC lib/json/json_util.o 00:04:03.458 CC lib/json/json_write.o 00:04:03.458 CC lib/env_dpdk/env.o 00:04:03.458 CC lib/env_dpdk/memory.o 00:04:03.458 CC lib/rdma/common.o 00:04:03.458 CC lib/vmd/vmd.o 00:04:03.458 CC lib/idxd/idxd.o 00:04:03.458 SYMLINK libspdk_trace_parser.so 00:04:03.458 CC lib/idxd/idxd_user.o 00:04:03.717 LIB libspdk_conf.a 00:04:03.717 SO libspdk_conf.so.5.0 00:04:03.717 CC lib/rdma/rdma_verbs.o 00:04:03.717 CC lib/vmd/led.o 00:04:03.717 LIB libspdk_json.a 00:04:03.717 CC lib/idxd/idxd_kernel.o 00:04:03.717 SYMLINK libspdk_conf.so 00:04:03.717 CC lib/env_dpdk/pci.o 00:04:03.717 CC lib/env_dpdk/init.o 00:04:03.717 SO libspdk_json.so.5.1 00:04:03.717 CC lib/env_dpdk/threads.o 00:04:03.717 SYMLINK libspdk_json.so 00:04:03.717 CC lib/env_dpdk/pci_ioat.o 00:04:03.975 CC lib/env_dpdk/pci_virtio.o 00:04:03.975 CC lib/env_dpdk/pci_vmd.o 00:04:03.975 CC lib/env_dpdk/pci_idxd.o 00:04:03.975 CC lib/env_dpdk/pci_event.o 00:04:03.975 LIB libspdk_rdma.a 00:04:03.975 SO libspdk_rdma.so.5.0 00:04:03.975 CC lib/env_dpdk/sigbus_handler.o 00:04:03.975 LIB libspdk_idxd.a 00:04:03.975 CC lib/env_dpdk/pci_dpdk.o 00:04:03.975 SO libspdk_idxd.so.11.0 00:04:03.975 SYMLINK libspdk_rdma.so 00:04:03.975 LIB libspdk_vmd.a 00:04:03.975 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:03.975 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.975 SO libspdk_vmd.so.5.0 00:04:04.234 SYMLINK libspdk_idxd.so 00:04:04.234 SYMLINK libspdk_vmd.so 00:04:04.234 CC lib/jsonrpc/jsonrpc_server.o 00:04:04.234 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:04.234 CC lib/jsonrpc/jsonrpc_client.o 00:04:04.234 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:04.492 LIB libspdk_jsonrpc.a 00:04:04.492 SO libspdk_jsonrpc.so.5.1 00:04:04.492 SYMLINK libspdk_jsonrpc.so 00:04:04.750 CC lib/rpc/rpc.o 00:04:04.750 LIB libspdk_env_dpdk.a 00:04:05.008 SO libspdk_env_dpdk.so.13.0 00:04:05.008 LIB libspdk_rpc.a 00:04:05.008 SO libspdk_rpc.so.5.0 00:04:05.008 SYMLINK libspdk_rpc.so 00:04:05.008 SYMLINK libspdk_env_dpdk.so 00:04:05.266 CC lib/trace/trace.o 00:04:05.266 CC lib/trace/trace_rpc.o 00:04:05.266 CC lib/trace/trace_flags.o 00:04:05.266 CC lib/sock/sock_rpc.o 00:04:05.266 CC lib/sock/sock.o 00:04:05.266 CC lib/notify/notify.o 00:04:05.266 CC lib/notify/notify_rpc.o 00:04:05.525 LIB libspdk_notify.a 00:04:05.525 SO libspdk_notify.so.5.0 00:04:05.525 LIB libspdk_trace.a 00:04:05.525 SO libspdk_trace.so.9.0 00:04:05.525 SYMLINK libspdk_notify.so 00:04:05.525 SYMLINK libspdk_trace.so 00:04:05.525 LIB libspdk_sock.a 00:04:05.525 SO libspdk_sock.so.8.0 00:04:05.784 SYMLINK libspdk_sock.so 00:04:05.784 CC lib/thread/iobuf.o 00:04:05.784 CC lib/thread/thread.o 00:04:05.784 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:05.784 CC lib/nvme/nvme_ctrlr.o 00:04:05.784 CC lib/nvme/nvme_fabric.o 00:04:05.784 CC lib/nvme/nvme_ns_cmd.o 00:04:05.784 CC lib/nvme/nvme_ns.o 00:04:05.784 CC lib/nvme/nvme_pcie_common.o 00:04:05.784 CC lib/nvme/nvme_qpair.o 00:04:05.784 CC lib/nvme/nvme_pcie.o 00:04:06.042 CC lib/nvme/nvme.o 00:04:06.607 CC lib/nvme/nvme_quirks.o 00:04:06.607 CC lib/nvme/nvme_transport.o 00:04:06.607 CC lib/nvme/nvme_discovery.o 00:04:06.607 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:06.865 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:06.865 CC lib/nvme/nvme_tcp.o 00:04:07.123 CC lib/nvme/nvme_opal.o 00:04:07.123 CC lib/nvme/nvme_io_msg.o 00:04:07.123 CC lib/nvme/nvme_poll_group.o 00:04:07.386 CC lib/nvme/nvme_zns.o 00:04:07.386 CC lib/nvme/nvme_cuse.o 00:04:07.386 LIB libspdk_thread.a 00:04:07.386 CC lib/nvme/nvme_vfio_user.o 00:04:07.386 SO libspdk_thread.so.9.0 
00:04:07.386 CC lib/nvme/nvme_rdma.o 00:04:07.386 SYMLINK libspdk_thread.so 00:04:07.648 CC lib/accel/accel.o 00:04:07.648 CC lib/blob/blobstore.o 00:04:07.906 CC lib/init/json_config.o 00:04:07.906 CC lib/virtio/virtio.o 00:04:07.906 CC lib/virtio/virtio_vhost_user.o 00:04:07.906 CC lib/init/subsystem.o 00:04:08.164 CC lib/accel/accel_rpc.o 00:04:08.164 CC lib/blob/request.o 00:04:08.164 CC lib/init/subsystem_rpc.o 00:04:08.164 CC lib/init/rpc.o 00:04:08.164 CC lib/virtio/virtio_vfio_user.o 00:04:08.164 CC lib/accel/accel_sw.o 00:04:08.164 CC lib/virtio/virtio_pci.o 00:04:08.422 CC lib/blob/zeroes.o 00:04:08.422 CC lib/blob/blob_bs_dev.o 00:04:08.422 LIB libspdk_init.a 00:04:08.422 SO libspdk_init.so.4.0 00:04:08.422 SYMLINK libspdk_init.so 00:04:08.681 LIB libspdk_virtio.a 00:04:08.681 SO libspdk_virtio.so.6.0 00:04:08.681 LIB libspdk_accel.a 00:04:08.681 CC lib/event/app.o 00:04:08.681 CC lib/event/reactor.o 00:04:08.681 CC lib/event/log_rpc.o 00:04:08.681 CC lib/event/scheduler_static.o 00:04:08.681 CC lib/event/app_rpc.o 00:04:08.681 SO libspdk_accel.so.14.0 00:04:08.681 SYMLINK libspdk_virtio.so 00:04:08.681 SYMLINK libspdk_accel.so 00:04:08.939 LIB libspdk_nvme.a 00:04:08.939 CC lib/bdev/bdev_rpc.o 00:04:08.939 CC lib/bdev/bdev_zone.o 00:04:08.939 CC lib/bdev/bdev.o 00:04:08.939 CC lib/bdev/part.o 00:04:08.939 CC lib/bdev/scsi_nvme.o 00:04:08.939 SO libspdk_nvme.so.12.0 00:04:08.939 LIB libspdk_event.a 00:04:09.213 SO libspdk_event.so.12.0 00:04:09.213 SYMLINK libspdk_event.so 00:04:09.213 SYMLINK libspdk_nvme.so 00:04:10.633 LIB libspdk_blob.a 00:04:10.633 SO libspdk_blob.so.10.1 00:04:10.633 SYMLINK libspdk_blob.so 00:04:10.633 CC lib/blobfs/blobfs.o 00:04:10.633 CC lib/blobfs/tree.o 00:04:10.633 CC lib/lvol/lvol.o 00:04:11.569 LIB libspdk_bdev.a 00:04:11.569 SO libspdk_bdev.so.14.0 00:04:11.569 LIB libspdk_blobfs.a 00:04:11.569 SYMLINK libspdk_bdev.so 00:04:11.569 SO libspdk_blobfs.so.9.0 00:04:11.827 LIB libspdk_lvol.a 00:04:11.827 SYMLINK libspdk_blobfs.so 00:04:11.827 SO libspdk_lvol.so.9.1 00:04:11.827 CC lib/scsi/dev.o 00:04:11.827 CC lib/nbd/nbd.o 00:04:11.827 CC lib/scsi/lun.o 00:04:11.827 CC lib/scsi/port.o 00:04:11.827 CC lib/scsi/scsi.o 00:04:11.827 CC lib/nbd/nbd_rpc.o 00:04:11.827 CC lib/ublk/ublk.o 00:04:11.827 CC lib/ftl/ftl_core.o 00:04:11.827 CC lib/nvmf/ctrlr.o 00:04:11.827 SYMLINK libspdk_lvol.so 00:04:11.827 CC lib/scsi/scsi_bdev.o 00:04:11.827 CC lib/ftl/ftl_init.o 00:04:12.086 CC lib/scsi/scsi_pr.o 00:04:12.086 CC lib/scsi/scsi_rpc.o 00:04:12.086 CC lib/scsi/task.o 00:04:12.086 CC lib/ftl/ftl_layout.o 00:04:12.086 CC lib/ftl/ftl_debug.o 00:04:12.086 CC lib/ftl/ftl_io.o 00:04:12.086 CC lib/ftl/ftl_sb.o 00:04:12.086 LIB libspdk_nbd.a 00:04:12.345 SO libspdk_nbd.so.6.0 00:04:12.345 CC lib/ftl/ftl_l2p.o 00:04:12.345 SYMLINK libspdk_nbd.so 00:04:12.345 CC lib/ftl/ftl_l2p_flat.o 00:04:12.345 CC lib/ublk/ublk_rpc.o 00:04:12.345 LIB libspdk_scsi.a 00:04:12.345 SO libspdk_scsi.so.8.0 00:04:12.345 CC lib/nvmf/ctrlr_discovery.o 00:04:12.345 CC lib/nvmf/ctrlr_bdev.o 00:04:12.345 CC lib/ftl/ftl_nv_cache.o 00:04:12.345 CC lib/ftl/ftl_band.o 00:04:12.345 CC lib/ftl/ftl_band_ops.o 00:04:12.345 SYMLINK libspdk_scsi.so 00:04:12.603 CC lib/ftl/ftl_writer.o 00:04:12.603 LIB libspdk_ublk.a 00:04:12.603 CC lib/ftl/ftl_rq.o 00:04:12.603 CC lib/ftl/ftl_reloc.o 00:04:12.603 SO libspdk_ublk.so.2.0 00:04:12.603 SYMLINK libspdk_ublk.so 00:04:12.603 CC lib/nvmf/subsystem.o 00:04:12.603 CC lib/ftl/ftl_l2p_cache.o 00:04:12.862 CC lib/ftl/ftl_p2l.o 00:04:12.862 CC lib/ftl/mngt/ftl_mngt.o 
00:04:12.862 CC lib/nvmf/nvmf.o 00:04:12.862 CC lib/iscsi/conn.o 00:04:12.862 CC lib/vhost/vhost.o 00:04:13.120 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:13.120 CC lib/iscsi/init_grp.o 00:04:13.120 CC lib/nvmf/nvmf_rpc.o 00:04:13.377 CC lib/vhost/vhost_rpc.o 00:04:13.377 CC lib/vhost/vhost_scsi.o 00:04:13.377 CC lib/vhost/vhost_blk.o 00:04:13.377 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:13.634 CC lib/nvmf/transport.o 00:04:13.635 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:13.635 CC lib/iscsi/iscsi.o 00:04:13.635 CC lib/vhost/rte_vhost_user.o 00:04:13.635 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:13.635 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:13.892 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:13.892 CC lib/iscsi/md5.o 00:04:13.892 CC lib/iscsi/param.o 00:04:14.150 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.151 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:14.151 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.151 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.151 CC lib/nvmf/tcp.o 00:04:14.151 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.151 CC lib/nvmf/rdma.o 00:04:14.409 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.409 CC lib/ftl/utils/ftl_conf.o 00:04:14.409 CC lib/iscsi/portal_grp.o 00:04:14.409 CC lib/ftl/utils/ftl_md.o 00:04:14.409 CC lib/ftl/utils/ftl_mempool.o 00:04:14.409 CC lib/iscsi/tgt_node.o 00:04:14.409 CC lib/ftl/utils/ftl_bitmap.o 00:04:14.667 CC lib/iscsi/iscsi_subsystem.o 00:04:14.667 CC lib/ftl/utils/ftl_property.o 00:04:14.667 CC lib/iscsi/iscsi_rpc.o 00:04:14.667 CC lib/iscsi/task.o 00:04:14.925 LIB libspdk_vhost.a 00:04:14.925 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:14.925 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:14.925 SO libspdk_vhost.so.7.1 00:04:14.925 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:14.925 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:14.925 SYMLINK libspdk_vhost.so 00:04:14.925 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:14.925 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:14.925 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:14.925 LIB libspdk_iscsi.a 00:04:15.183 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.183 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.183 SO libspdk_iscsi.so.7.0 00:04:15.183 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.183 CC lib/ftl/base/ftl_base_dev.o 00:04:15.183 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.183 CC lib/ftl/ftl_trace.o 00:04:15.183 SYMLINK libspdk_iscsi.so 00:04:15.442 LIB libspdk_ftl.a 00:04:15.700 SO libspdk_ftl.so.8.0 00:04:15.958 SYMLINK libspdk_ftl.so 00:04:16.216 LIB libspdk_nvmf.a 00:04:16.474 SO libspdk_nvmf.so.17.0 00:04:16.474 SYMLINK libspdk_nvmf.so 00:04:16.732 CC module/env_dpdk/env_dpdk_rpc.o 00:04:16.732 CC module/sock/uring/uring.o 00:04:16.732 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:16.732 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:16.732 CC module/scheduler/gscheduler/gscheduler.o 00:04:16.732 CC module/sock/posix/posix.o 00:04:16.732 CC module/accel/error/accel_error.o 00:04:16.732 CC module/accel/dsa/accel_dsa.o 00:04:16.732 CC module/accel/ioat/accel_ioat.o 00:04:16.732 CC module/blob/bdev/blob_bdev.o 00:04:16.989 LIB libspdk_env_dpdk_rpc.a 00:04:16.989 SO libspdk_env_dpdk_rpc.so.5.0 00:04:16.989 LIB libspdk_scheduler_dpdk_governor.a 00:04:16.989 LIB libspdk_scheduler_gscheduler.a 00:04:16.989 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:16.989 SO libspdk_scheduler_gscheduler.so.3.0 00:04:16.989 SYMLINK libspdk_env_dpdk_rpc.so 00:04:16.989 CC module/accel/error/accel_error_rpc.o 00:04:16.989 LIB libspdk_scheduler_dynamic.a 00:04:16.989 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:16.989 SYMLINK 
libspdk_scheduler_gscheduler.so 00:04:16.989 CC module/accel/ioat/accel_ioat_rpc.o 00:04:16.989 CC module/accel/dsa/accel_dsa_rpc.o 00:04:16.989 SO libspdk_scheduler_dynamic.so.3.0 00:04:16.989 SYMLINK libspdk_scheduler_dynamic.so 00:04:16.989 LIB libspdk_blob_bdev.a 00:04:17.247 SO libspdk_blob_bdev.so.10.1 00:04:17.247 CC module/accel/iaa/accel_iaa.o 00:04:17.247 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.247 LIB libspdk_accel_error.a 00:04:17.247 SYMLINK libspdk_blob_bdev.so 00:04:17.247 LIB libspdk_accel_dsa.a 00:04:17.247 LIB libspdk_accel_ioat.a 00:04:17.247 SO libspdk_accel_error.so.1.0 00:04:17.247 SO libspdk_accel_dsa.so.4.0 00:04:17.247 SO libspdk_accel_ioat.so.5.0 00:04:17.247 SYMLINK libspdk_accel_error.so 00:04:17.247 SYMLINK libspdk_accel_dsa.so 00:04:17.247 SYMLINK libspdk_accel_ioat.so 00:04:17.247 CC module/blobfs/bdev/blobfs_bdev.o 00:04:17.247 CC module/bdev/delay/vbdev_delay.o 00:04:17.247 CC module/bdev/error/vbdev_error.o 00:04:17.504 LIB libspdk_accel_iaa.a 00:04:17.504 CC module/bdev/gpt/gpt.o 00:04:17.504 CC module/bdev/lvol/vbdev_lvol.o 00:04:17.504 SO libspdk_accel_iaa.so.2.0 00:04:17.504 CC module/bdev/malloc/bdev_malloc.o 00:04:17.504 CC module/bdev/null/bdev_null.o 00:04:17.504 SYMLINK libspdk_accel_iaa.so 00:04:17.504 CC module/bdev/error/vbdev_error_rpc.o 00:04:17.504 LIB libspdk_sock_posix.a 00:04:17.504 LIB libspdk_sock_uring.a 00:04:17.504 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:17.504 SO libspdk_sock_uring.so.4.0 00:04:17.504 SO libspdk_sock_posix.so.5.0 00:04:17.504 CC module/bdev/gpt/vbdev_gpt.o 00:04:17.762 SYMLINK libspdk_sock_uring.so 00:04:17.762 CC module/bdev/null/bdev_null_rpc.o 00:04:17.762 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:17.762 LIB libspdk_bdev_error.a 00:04:17.762 SYMLINK libspdk_sock_posix.so 00:04:17.762 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:17.762 SO libspdk_bdev_error.so.5.0 00:04:17.762 LIB libspdk_blobfs_bdev.a 00:04:17.762 SYMLINK libspdk_bdev_error.so 00:04:17.762 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:17.762 SO libspdk_blobfs_bdev.so.5.0 00:04:17.762 LIB libspdk_bdev_null.a 00:04:17.762 LIB libspdk_bdev_delay.a 00:04:17.762 CC module/bdev/nvme/bdev_nvme.o 00:04:17.762 SYMLINK libspdk_blobfs_bdev.so 00:04:17.762 SO libspdk_bdev_null.so.5.0 00:04:17.762 CC module/bdev/passthru/vbdev_passthru.o 00:04:17.762 SO libspdk_bdev_delay.so.5.0 00:04:17.762 LIB libspdk_bdev_gpt.a 00:04:18.020 SO libspdk_bdev_gpt.so.5.0 00:04:18.020 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.020 SYMLINK libspdk_bdev_null.so 00:04:18.020 LIB libspdk_bdev_malloc.a 00:04:18.020 SYMLINK libspdk_bdev_delay.so 00:04:18.020 CC module/bdev/raid/bdev_raid.o 00:04:18.020 CC module/bdev/split/vbdev_split.o 00:04:18.020 SO libspdk_bdev_malloc.so.5.0 00:04:18.020 SYMLINK libspdk_bdev_gpt.so 00:04:18.020 LIB libspdk_bdev_lvol.a 00:04:18.020 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.020 SO libspdk_bdev_lvol.so.5.0 00:04:18.020 CC module/bdev/uring/bdev_uring.o 00:04:18.020 SYMLINK libspdk_bdev_malloc.so 00:04:18.020 CC module/bdev/uring/bdev_uring_rpc.o 00:04:18.020 CC module/bdev/split/vbdev_split_rpc.o 00:04:18.278 SYMLINK libspdk_bdev_lvol.so 00:04:18.278 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.278 CC module/bdev/aio/bdev_aio.o 00:04:18.278 LIB libspdk_bdev_passthru.a 00:04:18.278 CC module/bdev/nvme/nvme_rpc.o 00:04:18.278 SO libspdk_bdev_passthru.so.5.0 00:04:18.278 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:18.278 SYMLINK libspdk_bdev_passthru.so 00:04:18.278 LIB libspdk_bdev_split.a 00:04:18.278 CC 
module/bdev/raid/bdev_raid_rpc.o 00:04:18.278 SO libspdk_bdev_split.so.5.0 00:04:18.536 SYMLINK libspdk_bdev_split.so 00:04:18.536 LIB libspdk_bdev_uring.a 00:04:18.536 CC module/bdev/ftl/bdev_ftl.o 00:04:18.536 SO libspdk_bdev_uring.so.5.0 00:04:18.536 LIB libspdk_bdev_zone_block.a 00:04:18.536 SO libspdk_bdev_zone_block.so.5.0 00:04:18.536 CC module/bdev/iscsi/bdev_iscsi.o 00:04:18.536 CC module/bdev/aio/bdev_aio_rpc.o 00:04:18.536 CC module/bdev/raid/bdev_raid_sb.o 00:04:18.536 SYMLINK libspdk_bdev_uring.so 00:04:18.536 CC module/bdev/raid/raid0.o 00:04:18.536 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:18.536 SYMLINK libspdk_bdev_zone_block.so 00:04:18.536 CC module/bdev/raid/raid1.o 00:04:18.794 LIB libspdk_bdev_aio.a 00:04:18.794 SO libspdk_bdev_aio.so.5.0 00:04:18.794 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:18.794 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:18.794 SYMLINK libspdk_bdev_aio.so 00:04:18.794 CC module/bdev/nvme/bdev_mdns_client.o 00:04:18.794 CC module/bdev/nvme/vbdev_opal.o 00:04:18.794 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:18.794 CC module/bdev/raid/concat.o 00:04:18.794 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:18.794 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:19.053 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.053 LIB libspdk_bdev_iscsi.a 00:04:19.053 LIB libspdk_bdev_ftl.a 00:04:19.053 SO libspdk_bdev_iscsi.so.5.0 00:04:19.053 SO libspdk_bdev_ftl.so.5.0 00:04:19.053 LIB libspdk_bdev_raid.a 00:04:19.053 SYMLINK libspdk_bdev_iscsi.so 00:04:19.053 LIB libspdk_bdev_virtio.a 00:04:19.053 SYMLINK libspdk_bdev_ftl.so 00:04:19.053 SO libspdk_bdev_raid.so.5.0 00:04:19.053 SO libspdk_bdev_virtio.so.5.0 00:04:19.312 SYMLINK libspdk_bdev_virtio.so 00:04:19.312 SYMLINK libspdk_bdev_raid.so 00:04:19.879 LIB libspdk_bdev_nvme.a 00:04:20.136 SO libspdk_bdev_nvme.so.6.0 00:04:20.136 SYMLINK libspdk_bdev_nvme.so 00:04:20.394 CC module/event/subsystems/vmd/vmd.o 00:04:20.394 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:20.394 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:20.394 CC module/event/subsystems/iobuf/iobuf.o 00:04:20.394 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:20.394 CC module/event/subsystems/sock/sock.o 00:04:20.394 CC module/event/subsystems/scheduler/scheduler.o 00:04:20.653 LIB libspdk_event_sock.a 00:04:20.653 LIB libspdk_event_scheduler.a 00:04:20.653 LIB libspdk_event_vmd.a 00:04:20.653 LIB libspdk_event_vhost_blk.a 00:04:20.653 SO libspdk_event_sock.so.4.0 00:04:20.653 SO libspdk_event_scheduler.so.3.0 00:04:20.653 LIB libspdk_event_iobuf.a 00:04:20.653 SO libspdk_event_vhost_blk.so.2.0 00:04:20.653 SO libspdk_event_vmd.so.5.0 00:04:20.653 SO libspdk_event_iobuf.so.2.0 00:04:20.653 SYMLINK libspdk_event_scheduler.so 00:04:20.653 SYMLINK libspdk_event_sock.so 00:04:20.653 SYMLINK libspdk_event_vhost_blk.so 00:04:20.653 SYMLINK libspdk_event_vmd.so 00:04:20.653 SYMLINK libspdk_event_iobuf.so 00:04:20.912 CC module/event/subsystems/accel/accel.o 00:04:20.912 LIB libspdk_event_accel.a 00:04:20.912 SO libspdk_event_accel.so.5.0 00:04:21.171 SYMLINK libspdk_event_accel.so 00:04:21.171 CC module/event/subsystems/bdev/bdev.o 00:04:21.430 LIB libspdk_event_bdev.a 00:04:21.430 SO libspdk_event_bdev.so.5.0 00:04:21.689 SYMLINK libspdk_event_bdev.so 00:04:21.689 CC module/event/subsystems/ublk/ublk.o 00:04:21.689 CC module/event/subsystems/scsi/scsi.o 00:04:21.689 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:21.689 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:21.689 CC module/event/subsystems/nbd/nbd.o 00:04:21.948 LIB 
libspdk_event_ublk.a 00:04:21.948 LIB libspdk_event_nbd.a 00:04:21.948 LIB libspdk_event_scsi.a 00:04:21.948 SO libspdk_event_ublk.so.2.0 00:04:21.948 SO libspdk_event_nbd.so.5.0 00:04:21.948 SO libspdk_event_scsi.so.5.0 00:04:21.948 SYMLINK libspdk_event_ublk.so 00:04:21.948 SYMLINK libspdk_event_nbd.so 00:04:21.948 LIB libspdk_event_nvmf.a 00:04:21.948 SYMLINK libspdk_event_scsi.so 00:04:21.948 SO libspdk_event_nvmf.so.5.0 00:04:22.207 SYMLINK libspdk_event_nvmf.so 00:04:22.207 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:22.207 CC module/event/subsystems/iscsi/iscsi.o 00:04:22.488 LIB libspdk_event_vhost_scsi.a 00:04:22.488 LIB libspdk_event_iscsi.a 00:04:22.488 SO libspdk_event_vhost_scsi.so.2.0 00:04:22.488 SO libspdk_event_iscsi.so.5.0 00:04:22.488 SYMLINK libspdk_event_vhost_scsi.so 00:04:22.488 SYMLINK libspdk_event_iscsi.so 00:04:22.488 SO libspdk.so.5.0 00:04:22.488 SYMLINK libspdk.so 00:04:22.756 TEST_HEADER include/spdk/accel.h 00:04:22.756 TEST_HEADER include/spdk/accel_module.h 00:04:22.756 CXX app/trace/trace.o 00:04:22.756 TEST_HEADER include/spdk/assert.h 00:04:22.756 TEST_HEADER include/spdk/barrier.h 00:04:22.756 TEST_HEADER include/spdk/base64.h 00:04:22.756 TEST_HEADER include/spdk/bdev.h 00:04:22.756 TEST_HEADER include/spdk/bdev_module.h 00:04:22.756 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.756 TEST_HEADER include/spdk/bit_array.h 00:04:22.756 TEST_HEADER include/spdk/bit_pool.h 00:04:22.756 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.756 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.756 TEST_HEADER include/spdk/blobfs.h 00:04:22.756 TEST_HEADER include/spdk/blob.h 00:04:22.756 TEST_HEADER include/spdk/conf.h 00:04:22.756 TEST_HEADER include/spdk/config.h 00:04:22.756 TEST_HEADER include/spdk/cpuset.h 00:04:22.756 TEST_HEADER include/spdk/crc16.h 00:04:22.756 TEST_HEADER include/spdk/crc32.h 00:04:22.756 TEST_HEADER include/spdk/crc64.h 00:04:22.756 TEST_HEADER include/spdk/dif.h 00:04:22.756 TEST_HEADER include/spdk/dma.h 00:04:22.756 TEST_HEADER include/spdk/endian.h 00:04:22.756 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.756 TEST_HEADER include/spdk/env.h 00:04:22.756 TEST_HEADER include/spdk/event.h 00:04:22.756 TEST_HEADER include/spdk/fd_group.h 00:04:22.756 TEST_HEADER include/spdk/fd.h 00:04:22.756 CC examples/accel/perf/accel_perf.o 00:04:22.756 TEST_HEADER include/spdk/file.h 00:04:22.756 TEST_HEADER include/spdk/ftl.h 00:04:22.756 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.756 TEST_HEADER include/spdk/hexlify.h 00:04:22.756 CC test/event/event_perf/event_perf.o 00:04:22.756 TEST_HEADER include/spdk/histogram_data.h 00:04:22.756 TEST_HEADER include/spdk/idxd.h 00:04:22.756 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.756 TEST_HEADER include/spdk/init.h 00:04:22.756 CC test/accel/dif/dif.o 00:04:22.756 TEST_HEADER include/spdk/ioat.h 00:04:22.756 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.756 CC test/dma/test_dma/test_dma.o 00:04:22.756 CC test/blobfs/mkfs/mkfs.o 00:04:22.756 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.756 TEST_HEADER include/spdk/json.h 00:04:22.756 CC test/bdev/bdevio/bdevio.o 00:04:22.756 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.756 TEST_HEADER include/spdk/likely.h 00:04:22.756 TEST_HEADER include/spdk/log.h 00:04:22.756 TEST_HEADER include/spdk/lvol.h 00:04:22.756 TEST_HEADER include/spdk/memory.h 00:04:22.756 TEST_HEADER include/spdk/mmio.h 00:04:22.756 TEST_HEADER include/spdk/nbd.h 00:04:23.016 CC test/app/bdev_svc/bdev_svc.o 00:04:23.016 TEST_HEADER include/spdk/notify.h 00:04:23.016 TEST_HEADER 
include/spdk/nvme.h 00:04:23.016 CC test/env/mem_callbacks/mem_callbacks.o 00:04:23.016 TEST_HEADER include/spdk/nvme_intel.h 00:04:23.016 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:23.016 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:23.016 TEST_HEADER include/spdk/nvme_spec.h 00:04:23.016 TEST_HEADER include/spdk/nvme_zns.h 00:04:23.016 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:23.016 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:23.016 TEST_HEADER include/spdk/nvmf.h 00:04:23.016 TEST_HEADER include/spdk/nvmf_spec.h 00:04:23.016 TEST_HEADER include/spdk/nvmf_transport.h 00:04:23.016 TEST_HEADER include/spdk/opal.h 00:04:23.016 TEST_HEADER include/spdk/opal_spec.h 00:04:23.016 TEST_HEADER include/spdk/pci_ids.h 00:04:23.016 TEST_HEADER include/spdk/pipe.h 00:04:23.016 TEST_HEADER include/spdk/queue.h 00:04:23.016 TEST_HEADER include/spdk/reduce.h 00:04:23.016 TEST_HEADER include/spdk/rpc.h 00:04:23.016 TEST_HEADER include/spdk/scheduler.h 00:04:23.016 TEST_HEADER include/spdk/scsi.h 00:04:23.016 TEST_HEADER include/spdk/scsi_spec.h 00:04:23.016 TEST_HEADER include/spdk/sock.h 00:04:23.016 TEST_HEADER include/spdk/stdinc.h 00:04:23.016 TEST_HEADER include/spdk/string.h 00:04:23.016 TEST_HEADER include/spdk/thread.h 00:04:23.016 TEST_HEADER include/spdk/trace.h 00:04:23.016 TEST_HEADER include/spdk/trace_parser.h 00:04:23.016 TEST_HEADER include/spdk/tree.h 00:04:23.016 TEST_HEADER include/spdk/ublk.h 00:04:23.016 TEST_HEADER include/spdk/util.h 00:04:23.016 TEST_HEADER include/spdk/uuid.h 00:04:23.016 TEST_HEADER include/spdk/version.h 00:04:23.016 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:23.016 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:23.016 TEST_HEADER include/spdk/vhost.h 00:04:23.016 TEST_HEADER include/spdk/vmd.h 00:04:23.016 TEST_HEADER include/spdk/xor.h 00:04:23.016 TEST_HEADER include/spdk/zipf.h 00:04:23.016 CXX test/cpp_headers/accel.o 00:04:23.016 LINK event_perf 00:04:23.016 LINK mkfs 00:04:23.016 LINK bdev_svc 00:04:23.016 LINK mem_callbacks 00:04:23.275 CXX test/cpp_headers/accel_module.o 00:04:23.275 LINK spdk_trace 00:04:23.275 LINK dif 00:04:23.275 CC test/event/reactor/reactor.o 00:04:23.275 LINK test_dma 00:04:23.275 LINK bdevio 00:04:23.275 CXX test/cpp_headers/assert.o 00:04:23.275 LINK accel_perf 00:04:23.275 CC test/env/vtophys/vtophys.o 00:04:23.534 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:23.534 LINK reactor 00:04:23.534 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:23.534 LINK vtophys 00:04:23.534 CC app/trace_record/trace_record.o 00:04:23.534 CXX test/cpp_headers/barrier.o 00:04:23.534 CXX test/cpp_headers/base64.o 00:04:23.534 LINK env_dpdk_post_init 00:04:23.534 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:23.792 CC test/event/reactor_perf/reactor_perf.o 00:04:23.792 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.792 CXX test/cpp_headers/bdev.o 00:04:23.792 CC examples/blob/hello_world/hello_blob.o 00:04:23.792 LINK spdk_trace_record 00:04:23.792 CC examples/bdev/bdevperf/bdevperf.o 00:04:23.792 CC examples/ioat/perf/perf.o 00:04:23.792 LINK reactor_perf 00:04:23.792 CC test/env/memory/memory_ut.o 00:04:23.792 LINK nvme_fuzz 00:04:24.051 CXX test/cpp_headers/bdev_module.o 00:04:24.051 LINK hello_bdev 00:04:24.051 LINK hello_blob 00:04:24.051 CC app/nvmf_tgt/nvmf_main.o 00:04:24.051 LINK ioat_perf 00:04:24.051 CC test/event/app_repeat/app_repeat.o 00:04:24.051 CXX test/cpp_headers/bdev_zone.o 00:04:24.309 CC test/event/scheduler/scheduler.o 00:04:24.309 LINK nvmf_tgt 00:04:24.309 LINK app_repeat 00:04:24.309 CC 
examples/ioat/verify/verify.o 00:04:24.309 CC examples/blob/cli/blobcli.o 00:04:24.309 CC examples/nvme/hello_world/hello_world.o 00:04:24.309 CXX test/cpp_headers/bit_array.o 00:04:24.309 LINK memory_ut 00:04:24.568 LINK scheduler 00:04:24.568 CXX test/cpp_headers/bit_pool.o 00:04:24.568 LINK verify 00:04:24.568 LINK bdevperf 00:04:24.568 LINK hello_world 00:04:24.568 CC app/iscsi_tgt/iscsi_tgt.o 00:04:24.568 CXX test/cpp_headers/blob_bdev.o 00:04:24.568 CC examples/sock/hello_world/hello_sock.o 00:04:24.840 CC test/env/pci/pci_ut.o 00:04:24.840 CC examples/nvme/reconnect/reconnect.o 00:04:24.840 LINK blobcli 00:04:24.840 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.840 LINK iscsi_tgt 00:04:24.840 CC app/spdk_lspci/spdk_lspci.o 00:04:24.840 CC test/lvol/esnap/esnap.o 00:04:24.840 LINK hello_sock 00:04:25.101 CC app/spdk_tgt/spdk_tgt.o 00:04:25.101 LINK spdk_lspci 00:04:25.101 CXX test/cpp_headers/blobfs.o 00:04:25.101 LINK pci_ut 00:04:25.101 LINK reconnect 00:04:25.101 CC app/spdk_nvme_perf/perf.o 00:04:25.101 CC test/rpc_client/rpc_client_test.o 00:04:25.101 LINK spdk_tgt 00:04:25.359 CC test/nvme/aer/aer.o 00:04:25.359 CXX test/cpp_headers/blob.o 00:04:25.359 CC test/thread/poller_perf/poller_perf.o 00:04:25.359 LINK iscsi_fuzz 00:04:25.359 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.359 CXX test/cpp_headers/conf.o 00:04:25.359 LINK rpc_client_test 00:04:25.359 CC examples/nvme/arbitration/arbitration.o 00:04:25.618 LINK poller_perf 00:04:25.618 CC app/spdk_nvme_identify/identify.o 00:04:25.618 LINK aer 00:04:25.618 CXX test/cpp_headers/config.o 00:04:25.618 CXX test/cpp_headers/cpuset.o 00:04:25.618 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.618 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.877 CXX test/cpp_headers/crc16.o 00:04:25.877 LINK arbitration 00:04:25.877 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:25.877 CC test/nvme/reset/reset.o 00:04:25.877 CC test/nvme/sgl/sgl.o 00:04:25.877 LINK spdk_nvme_discover 00:04:25.877 LINK nvme_manage 00:04:25.877 CXX test/cpp_headers/crc32.o 00:04:26.135 CXX test/cpp_headers/crc64.o 00:04:26.135 LINK spdk_nvme_perf 00:04:26.135 CXX test/cpp_headers/dif.o 00:04:26.135 LINK reset 00:04:26.135 CC examples/nvme/hotplug/hotplug.o 00:04:26.135 LINK sgl 00:04:26.135 CC test/nvme/e2edp/nvme_dp.o 00:04:26.135 CXX test/cpp_headers/dma.o 00:04:26.393 LINK vhost_fuzz 00:04:26.393 CC test/nvme/overhead/overhead.o 00:04:26.393 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:26.393 CC examples/nvme/abort/abort.o 00:04:26.393 LINK spdk_nvme_identify 00:04:26.393 CXX test/cpp_headers/endian.o 00:04:26.393 LINK hotplug 00:04:26.393 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.393 LINK nvme_dp 00:04:26.652 CC test/app/histogram_perf/histogram_perf.o 00:04:26.652 LINK cmb_copy 00:04:26.652 LINK overhead 00:04:26.652 CXX test/cpp_headers/env_dpdk.o 00:04:26.652 LINK pmr_persistence 00:04:26.652 CC app/spdk_top/spdk_top.o 00:04:26.652 CC test/app/jsoncat/jsoncat.o 00:04:26.652 LINK histogram_perf 00:04:26.652 CC test/app/stub/stub.o 00:04:26.911 CXX test/cpp_headers/env.o 00:04:26.911 LINK abort 00:04:26.911 LINK jsoncat 00:04:26.911 CC test/nvme/err_injection/err_injection.o 00:04:26.911 CC test/nvme/startup/startup.o 00:04:26.911 CC test/nvme/reserve/reserve.o 00:04:26.911 CC test/nvme/simple_copy/simple_copy.o 00:04:26.911 LINK stub 00:04:26.911 CXX test/cpp_headers/event.o 00:04:27.169 LINK err_injection 00:04:27.169 LINK startup 00:04:27.169 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.169 CC app/vhost/vhost.o 00:04:27.169 LINK 
reserve 00:04:27.169 CXX test/cpp_headers/fd_group.o 00:04:27.169 LINK simple_copy 00:04:27.169 CC test/nvme/connect_stress/connect_stress.o 00:04:27.169 LINK lsvmd 00:04:27.429 LINK vhost 00:04:27.429 CXX test/cpp_headers/fd.o 00:04:27.429 CC test/nvme/boot_partition/boot_partition.o 00:04:27.429 LINK connect_stress 00:04:27.429 CC test/nvme/compliance/nvme_compliance.o 00:04:27.429 CC examples/nvmf/nvmf/nvmf.o 00:04:27.429 CC examples/vmd/led/led.o 00:04:27.429 CXX test/cpp_headers/file.o 00:04:27.429 LINK boot_partition 00:04:27.687 LINK spdk_top 00:04:27.687 CC examples/util/zipf/zipf.o 00:04:27.687 LINK led 00:04:27.687 CC test/nvme/fused_ordering/fused_ordering.o 00:04:27.687 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:27.687 CXX test/cpp_headers/ftl.o 00:04:27.687 LINK zipf 00:04:27.687 LINK nvme_compliance 00:04:27.687 CC test/nvme/fdp/fdp.o 00:04:27.947 LINK fused_ordering 00:04:27.947 LINK nvmf 00:04:27.947 LINK doorbell_aers 00:04:27.947 CXX test/cpp_headers/gpt_spec.o 00:04:27.947 CXX test/cpp_headers/hexlify.o 00:04:27.947 CC app/spdk_dd/spdk_dd.o 00:04:27.947 CC examples/thread/thread/thread_ex.o 00:04:27.947 CXX test/cpp_headers/histogram_data.o 00:04:27.947 CXX test/cpp_headers/idxd.o 00:04:28.205 CXX test/cpp_headers/idxd_spec.o 00:04:28.205 LINK fdp 00:04:28.205 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.205 CC examples/idxd/perf/perf.o 00:04:28.205 CXX test/cpp_headers/init.o 00:04:28.205 CC app/fio/nvme/fio_plugin.o 00:04:28.205 LINK thread 00:04:28.205 CC test/nvme/cuse/cuse.o 00:04:28.463 CXX test/cpp_headers/ioat.o 00:04:28.463 LINK spdk_dd 00:04:28.463 CC app/fio/bdev/fio_plugin.o 00:04:28.463 LINK interrupt_tgt 00:04:28.463 CXX test/cpp_headers/ioat_spec.o 00:04:28.463 CXX test/cpp_headers/iscsi_spec.o 00:04:28.463 LINK idxd_perf 00:04:28.722 CXX test/cpp_headers/json.o 00:04:28.722 CXX test/cpp_headers/jsonrpc.o 00:04:28.722 CXX test/cpp_headers/likely.o 00:04:28.722 CXX test/cpp_headers/log.o 00:04:28.722 CXX test/cpp_headers/lvol.o 00:04:28.722 CXX test/cpp_headers/memory.o 00:04:28.722 CXX test/cpp_headers/mmio.o 00:04:28.722 CXX test/cpp_headers/nbd.o 00:04:28.722 CXX test/cpp_headers/notify.o 00:04:28.722 CXX test/cpp_headers/nvme.o 00:04:28.982 LINK spdk_nvme 00:04:28.982 CXX test/cpp_headers/nvme_intel.o 00:04:28.982 CXX test/cpp_headers/nvme_ocssd.o 00:04:28.982 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:28.982 LINK spdk_bdev 00:04:28.982 CXX test/cpp_headers/nvme_spec.o 00:04:28.982 CXX test/cpp_headers/nvme_zns.o 00:04:28.982 CXX test/cpp_headers/nvmf_cmd.o 00:04:28.982 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:28.982 CXX test/cpp_headers/nvmf.o 00:04:28.982 CXX test/cpp_headers/nvmf_spec.o 00:04:28.982 CXX test/cpp_headers/nvmf_transport.o 00:04:28.982 CXX test/cpp_headers/opal.o 00:04:29.240 CXX test/cpp_headers/opal_spec.o 00:04:29.240 CXX test/cpp_headers/pci_ids.o 00:04:29.240 CXX test/cpp_headers/pipe.o 00:04:29.240 CXX test/cpp_headers/queue.o 00:04:29.240 CXX test/cpp_headers/reduce.o 00:04:29.240 CXX test/cpp_headers/rpc.o 00:04:29.240 CXX test/cpp_headers/scheduler.o 00:04:29.240 CXX test/cpp_headers/scsi.o 00:04:29.240 CXX test/cpp_headers/scsi_spec.o 00:04:29.240 CXX test/cpp_headers/sock.o 00:04:29.240 CXX test/cpp_headers/stdinc.o 00:04:29.498 CXX test/cpp_headers/string.o 00:04:29.498 CXX test/cpp_headers/thread.o 00:04:29.498 CXX test/cpp_headers/trace.o 00:04:29.498 CXX test/cpp_headers/trace_parser.o 00:04:29.498 LINK cuse 00:04:29.498 CXX test/cpp_headers/tree.o 00:04:29.498 CXX test/cpp_headers/ublk.o 00:04:29.498 CXX 
test/cpp_headers/util.o 00:04:29.498 CXX test/cpp_headers/uuid.o 00:04:29.498 CXX test/cpp_headers/version.o 00:04:29.498 CXX test/cpp_headers/vfio_user_pci.o 00:04:29.498 CXX test/cpp_headers/vfio_user_spec.o 00:04:29.498 CXX test/cpp_headers/vhost.o 00:04:29.498 CXX test/cpp_headers/vmd.o 00:04:29.498 CXX test/cpp_headers/xor.o 00:04:29.769 CXX test/cpp_headers/zipf.o 00:04:30.030 LINK esnap 00:04:30.289 ************************************ 00:04:30.289 END TEST make 00:04:30.289 ************************************ 00:04:30.289 00:04:30.289 real 0m51.504s 00:04:30.289 user 4m59.296s 00:04:30.289 sys 0m57.042s 00:04:30.289 06:29:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:30.289 06:29:25 -- common/autotest_common.sh@10 -- $ set +x 00:04:30.548 06:29:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.548 06:29:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.548 06:29:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.548 06:29:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.548 06:29:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.548 06:29:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.548 06:29:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.548 06:29:25 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.548 06:29:25 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.548 06:29:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.548 06:29:25 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.548 06:29:25 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.548 06:29:25 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.548 06:29:25 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.548 06:29:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.548 06:29:25 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.548 06:29:25 -- scripts/common.sh@344 -- # : 1 00:04:30.548 06:29:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.548 06:29:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.548 06:29:25 -- scripts/common.sh@364 -- # decimal 1 00:04:30.548 06:29:25 -- scripts/common.sh@352 -- # local d=1 00:04:30.548 06:29:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.548 06:29:25 -- scripts/common.sh@354 -- # echo 1 00:04:30.548 06:29:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.548 06:29:25 -- scripts/common.sh@365 -- # decimal 2 00:04:30.548 06:29:25 -- scripts/common.sh@352 -- # local d=2 00:04:30.548 06:29:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.549 06:29:25 -- scripts/common.sh@354 -- # echo 2 00:04:30.549 06:29:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.549 06:29:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.549 06:29:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.549 06:29:25 -- scripts/common.sh@367 -- # return 0 00:04:30.549 06:29:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.549 06:29:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.549 --rc genhtml_branch_coverage=1 00:04:30.549 --rc genhtml_function_coverage=1 00:04:30.549 --rc genhtml_legend=1 00:04:30.549 --rc geninfo_all_blocks=1 00:04:30.549 --rc geninfo_unexecuted_blocks=1 00:04:30.549 00:04:30.549 ' 00:04:30.549 06:29:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.549 --rc genhtml_branch_coverage=1 00:04:30.549 --rc genhtml_function_coverage=1 00:04:30.549 --rc genhtml_legend=1 00:04:30.549 --rc geninfo_all_blocks=1 00:04:30.549 --rc geninfo_unexecuted_blocks=1 00:04:30.549 00:04:30.549 ' 00:04:30.549 06:29:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.549 --rc genhtml_branch_coverage=1 00:04:30.549 --rc genhtml_function_coverage=1 00:04:30.549 --rc genhtml_legend=1 00:04:30.549 --rc geninfo_all_blocks=1 00:04:30.549 --rc geninfo_unexecuted_blocks=1 00:04:30.549 00:04:30.549 ' 00:04:30.549 06:29:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.549 --rc genhtml_branch_coverage=1 00:04:30.549 --rc genhtml_function_coverage=1 00:04:30.549 --rc genhtml_legend=1 00:04:30.549 --rc geninfo_all_blocks=1 00:04:30.549 --rc geninfo_unexecuted_blocks=1 00:04:30.549 00:04:30.549 ' 00:04:30.549 06:29:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.549 06:29:25 -- nvmf/common.sh@7 -- # uname -s 00:04:30.549 06:29:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.549 06:29:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.549 06:29:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.549 06:29:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.549 06:29:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.549 06:29:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.549 06:29:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.549 06:29:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.549 06:29:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.549 06:29:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.549 06:29:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:04:30.549 
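The xtrace a few entries back walks scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before choosing coverage flags: both versions are split on ".-:" and compared component by component. A hedged reconstruction of that logic from the traced steps (the function names lt and cmp_versions match the trace; the exact bodies are an approximation, and the hostnqn/hostid trace resumes below):

    # approximate rebuild of scripts/common.sh's traced comparison
    cmp_versions() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"   # e.g. "1.15" -> (1 15)
        local op=$2
        IFS='.-:' read -ra ver2 <<< "$3"   # e.g. "2"    -> (2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every component equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "pre-2.x lcov: keep the --rc lcov_*_coverage options"

This matches the outcome visible in the trace: lt 1.15 2 succeeds, so the older --rc lcov_branch_coverage/lcov_function_coverage syntax is exported into LCOV_OPTS.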
06:29:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:04:30.549 06:29:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.549 06:29:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.549 06:29:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:30.549 06:29:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.549 06:29:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.549 06:29:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.549 06:29:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.549 06:29:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 06:29:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 06:29:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 06:29:25 -- paths/export.sh@5 -- # export PATH 00:04:30.549 06:29:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.549 06:29:25 -- nvmf/common.sh@46 -- # : 0 00:04:30.549 06:29:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:30.549 06:29:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:30.549 06:29:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:30.549 06:29:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.549 06:29:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.549 06:29:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:30.549 06:29:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:30.549 06:29:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:30.549 06:29:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:30.549 06:29:25 -- spdk/autotest.sh@32 -- # uname -s 00:04:30.549 06:29:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:30.549 06:29:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:30.549 06:29:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:30.549 06:29:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:30.549 06:29:25 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:30.549 06:29:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:30.549 06:29:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:30.549 06:29:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:30.549 06:29:25 -- spdk/autotest.sh@48 -- # 
udevadm_pid=59774 00:04:30.549 06:29:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:30.549 06:29:26 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:30.808 06:29:26 -- spdk/autotest.sh@54 -- # echo 59788 00:04:30.808 06:29:26 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:30.808 06:29:26 -- spdk/autotest.sh@56 -- # echo 59797 00:04:30.808 06:29:26 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:30.808 06:29:26 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:30.808 06:29:26 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:30.808 06:29:26 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:30.808 06:29:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.808 06:29:26 -- common/autotest_common.sh@10 -- # set +x 00:04:30.808 06:29:26 -- spdk/autotest.sh@70 -- # create_test_list 00:04:30.808 06:29:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:30.808 06:29:26 -- common/autotest_common.sh@10 -- # set +x 00:04:30.808 06:29:26 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:30.808 06:29:26 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:30.808 06:29:26 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:30.808 06:29:26 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:30.808 06:29:26 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:30.808 06:29:26 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:30.808 06:29:26 -- common/autotest_common.sh@1450 -- # uname 00:04:30.808 06:29:26 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:30.808 06:29:26 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:30.808 06:29:26 -- common/autotest_common.sh@1470 -- # uname 00:04:30.808 06:29:26 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:30.808 06:29:26 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:30.808 06:29:26 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:30.808 lcov: LCOV version 1.15 00:04:30.808 06:29:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:38.924 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:38.924 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:38.924 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:38.924 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:38.924 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:38.924 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:00.885 06:29:55 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:00.885 06:29:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.885 06:29:55 -- common/autotest_common.sh@10 -- # set +x 00:05:00.885 06:29:55 -- spdk/autotest.sh@89 -- # rm -f 00:05:00.885 06:29:55 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.144 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:01.144 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:01.144 06:29:56 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:01.144 06:29:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:01.144 06:29:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:01.144 06:29:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:01.144 06:29:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:01.144 06:29:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:01.144 06:29:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:01.144 06:29:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:01.144 06:29:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:01.144 06:29:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:01.144 06:29:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:01.144 06:29:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:01.144 06:29:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:01.144 06:29:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:01.144 06:29:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:01.144 06:29:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:01.144 06:29:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:01.144 06:29:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:01.144 06:29:56 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:01.144 06:29:56 -- spdk/autotest.sh@108 -- # grep -v p 00:05:01.144 06:29:56 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:01.144 06:29:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:01.144 06:29:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:01.144 06:29:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:01.144 06:29:56 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:01.144 06:29:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.144 No valid GPT data, bailing 00:05:01.144 06:29:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:05:01.144 06:29:56 -- scripts/common.sh@393 -- # pt= 00:05:01.144 06:29:56 -- scripts/common.sh@394 -- # return 1 00:05:01.144 06:29:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.144 1+0 records in 00:05:01.144 1+0 records out 00:05:01.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472107 s, 222 MB/s 00:05:01.144 06:29:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:01.144 06:29:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:01.144 06:29:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:01.144 06:29:56 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:01.144 06:29:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.144 No valid GPT data, bailing 00:05:01.144 06:29:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.144 06:29:56 -- scripts/common.sh@393 -- # pt= 00:05:01.144 06:29:56 -- scripts/common.sh@394 -- # return 1 00:05:01.144 06:29:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.144 1+0 records in 00:05:01.144 1+0 records out 00:05:01.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00331937 s, 316 MB/s 00:05:01.144 06:29:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:01.144 06:29:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:01.144 06:29:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:01.144 06:29:56 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:01.144 06:29:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:01.403 No valid GPT data, bailing 00:05:01.403 06:29:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:01.403 06:29:56 -- scripts/common.sh@393 -- # pt= 00:05:01.403 06:29:56 -- scripts/common.sh@394 -- # return 1 00:05:01.403 06:29:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:01.403 1+0 records in 00:05:01.403 1+0 records out 00:05:01.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422557 s, 248 MB/s 00:05:01.403 06:29:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:01.403 06:29:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:01.403 06:29:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:01.403 06:29:56 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:01.403 06:29:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:01.403 No valid GPT data, bailing 00:05:01.404 06:29:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:01.404 06:29:56 -- scripts/common.sh@393 -- # pt= 00:05:01.404 06:29:56 -- scripts/common.sh@394 -- # return 1 00:05:01.404 06:29:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:01.404 1+0 records in 00:05:01.404 1+0 records out 00:05:01.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00357632 s, 293 MB/s 00:05:01.404 06:29:56 -- spdk/autotest.sh@116 -- # sync 00:05:01.663 06:29:57 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.663 06:29:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.663 06:29:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:03.594 06:29:58 -- spdk/autotest.sh@122 -- # uname -s 00:05:03.594 06:29:58 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
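The pre-cleanup pass traced above enumerates whole NVMe namespaces, skips zoned ones, and zeroes the first MiB of any namespace that carries no partition table, so stale metadata cannot leak into the tests. A condensed stand-alone sketch under those assumptions; the real flow goes through get_zoned_devs, block_in_use and scripts/spdk-gpt.py rather than this simplified blkid-only check:

for nvme in /dev/nvme*n*; do
    [[ $nvme == *p* ]] && continue    # keep whole namespaces, drop partitions (the grep -v p above)
    name=${nvme#/dev/}
    # Zoned namespaces are excluded, mirroring the is_block_zoned checks in the trace.
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        continue
    fi
    # Empty PTTYPE means no partition table: the "No valid GPT data, bailing" case above.
    pt=$(blkid -s PTTYPE -o value "$nvme" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$nvme" bs=1M count=1    # clobber any leftover metadata in the first MiB
    fi
done

Each wipe produces the 1+0 records in/out lines seen above (roughly 220-320 MB/s on this runner), and a global sync then flushes the writes before the setup suite starts.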
00:05:03.594 06:29:58 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:03.594 06:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.594 06:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.594 06:29:58 -- common/autotest_common.sh@10 -- # set +x 00:05:03.594 ************************************ 00:05:03.594 START TEST setup.sh 00:05:03.594 ************************************ 00:05:03.594 06:29:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:03.594 * Looking for test storage... 00:05:03.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.594 06:29:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.594 06:29:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.594 06:29:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.594 06:29:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.594 06:29:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.594 06:29:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.594 06:29:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.594 06:29:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.594 06:29:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.594 06:29:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.594 06:29:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.594 06:29:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.594 06:29:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.594 06:29:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.594 06:29:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.594 06:29:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.594 06:29:59 -- scripts/common.sh@344 -- # : 1 00:05:03.594 06:29:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.594 06:29:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.594 06:29:59 -- scripts/common.sh@364 -- # decimal 1 00:05:03.594 06:29:59 -- scripts/common.sh@352 -- # local d=1 00:05:03.594 06:29:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.594 06:29:59 -- scripts/common.sh@354 -- # echo 1 00:05:03.594 06:29:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.853 06:29:59 -- scripts/common.sh@365 -- # decimal 2 00:05:03.853 06:29:59 -- scripts/common.sh@352 -- # local d=2 00:05:03.853 06:29:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.853 06:29:59 -- scripts/common.sh@354 -- # echo 2 00:05:03.853 06:29:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.853 06:29:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.853 06:29:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.853 06:29:59 -- scripts/common.sh@367 -- # return 0 00:05:03.853 06:29:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.853 --rc genhtml_branch_coverage=1 00:05:03.853 --rc genhtml_function_coverage=1 00:05:03.853 --rc genhtml_legend=1 00:05:03.853 --rc geninfo_all_blocks=1 00:05:03.853 --rc geninfo_unexecuted_blocks=1 00:05:03.853 00:05:03.853 ' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.853 --rc genhtml_branch_coverage=1 00:05:03.853 --rc genhtml_function_coverage=1 00:05:03.853 --rc genhtml_legend=1 00:05:03.853 --rc geninfo_all_blocks=1 00:05:03.853 --rc geninfo_unexecuted_blocks=1 00:05:03.853 00:05:03.853 ' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.853 --rc genhtml_branch_coverage=1 00:05:03.853 --rc genhtml_function_coverage=1 00:05:03.853 --rc genhtml_legend=1 00:05:03.853 --rc geninfo_all_blocks=1 00:05:03.853 --rc geninfo_unexecuted_blocks=1 00:05:03.853 00:05:03.853 ' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.853 --rc genhtml_branch_coverage=1 00:05:03.853 --rc genhtml_function_coverage=1 00:05:03.853 --rc genhtml_legend=1 00:05:03.853 --rc geninfo_all_blocks=1 00:05:03.853 --rc geninfo_unexecuted_blocks=1 00:05:03.853 00:05:03.853 ' 00:05:03.853 06:29:59 -- setup/test-setup.sh@10 -- # uname -s 00:05:03.853 06:29:59 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:03.853 06:29:59 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:03.853 06:29:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.853 06:29:59 -- common/autotest_common.sh@10 -- # set +x 00:05:03.853 ************************************ 00:05:03.853 START TEST acl 00:05:03.853 ************************************ 00:05:03.853 06:29:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:03.853 * Looking for test storage... 
00:05:03.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.853 06:29:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.853 06:29:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.853 06:29:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.853 06:29:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.853 06:29:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.853 06:29:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.853 06:29:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.853 06:29:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.853 06:29:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.853 06:29:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.854 06:29:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.854 06:29:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.854 06:29:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.854 06:29:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.854 06:29:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.854 06:29:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.854 06:29:59 -- scripts/common.sh@344 -- # : 1 00:05:03.854 06:29:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.854 06:29:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.854 06:29:59 -- scripts/common.sh@364 -- # decimal 1 00:05:03.854 06:29:59 -- scripts/common.sh@352 -- # local d=1 00:05:03.854 06:29:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.854 06:29:59 -- scripts/common.sh@354 -- # echo 1 00:05:03.854 06:29:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.854 06:29:59 -- scripts/common.sh@365 -- # decimal 2 00:05:03.854 06:29:59 -- scripts/common.sh@352 -- # local d=2 00:05:03.854 06:29:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.854 06:29:59 -- scripts/common.sh@354 -- # echo 2 00:05:03.854 06:29:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.854 06:29:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.854 06:29:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.854 06:29:59 -- scripts/common.sh@367 -- # return 0 00:05:03.854 06:29:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.854 06:29:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.854 --rc genhtml_branch_coverage=1 00:05:03.854 --rc genhtml_function_coverage=1 00:05:03.854 --rc genhtml_legend=1 00:05:03.854 --rc geninfo_all_blocks=1 00:05:03.854 --rc geninfo_unexecuted_blocks=1 00:05:03.854 00:05:03.854 ' 00:05:03.854 06:29:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.854 --rc genhtml_branch_coverage=1 00:05:03.854 --rc genhtml_function_coverage=1 00:05:03.854 --rc genhtml_legend=1 00:05:03.854 --rc geninfo_all_blocks=1 00:05:03.854 --rc geninfo_unexecuted_blocks=1 00:05:03.854 00:05:03.854 ' 00:05:03.854 06:29:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.854 --rc genhtml_branch_coverage=1 00:05:03.854 --rc genhtml_function_coverage=1 00:05:03.854 --rc genhtml_legend=1 00:05:03.854 --rc geninfo_all_blocks=1 00:05:03.854 --rc geninfo_unexecuted_blocks=1 00:05:03.854 00:05:03.854 ' 00:05:03.854 06:29:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.854 --rc genhtml_branch_coverage=1 00:05:03.854 --rc genhtml_function_coverage=1 00:05:03.854 --rc genhtml_legend=1 00:05:03.854 --rc geninfo_all_blocks=1 00:05:03.854 --rc geninfo_unexecuted_blocks=1 00:05:03.854 00:05:03.854 ' 00:05:03.854 06:29:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:03.854 06:29:59 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:03.854 06:29:59 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:03.854 06:29:59 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:03.854 06:29:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.854 06:29:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:03.854 06:29:59 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:03.854 06:29:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.854 06:29:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:03.854 06:29:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:03.854 06:29:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.854 06:29:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:03.854 06:29:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:03.854 06:29:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.854 06:29:59 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:03.854 06:29:59 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:03.854 06:29:59 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:03.854 06:29:59 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.854 06:29:59 -- setup/acl.sh@12 -- # devs=() 00:05:03.854 06:29:59 -- setup/acl.sh@12 -- # declare -a devs 00:05:03.854 06:29:59 -- setup/acl.sh@13 -- # drivers=() 00:05:03.854 06:29:59 -- setup/acl.sh@13 -- # declare -A drivers 00:05:03.854 06:29:59 -- setup/acl.sh@51 -- # setup reset 00:05:03.854 06:29:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.854 06:29:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.791 06:29:59 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:04.791 06:29:59 -- setup/acl.sh@16 -- # local dev driver 00:05:04.791 06:29:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.791 06:29:59 -- setup/acl.sh@15 -- # setup output status 00:05:04.791 06:29:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.791 06:30:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.791 Hugepages 00:05:04.791 node hugesize free / total 00:05:04.791 06:30:00 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:04.791 06:30:00 -- setup/acl.sh@19 -- # continue 00:05:04.791 06:30:00 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:04.791 00:05:04.791 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.791 06:30:00 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:04.791 06:30:00 -- setup/acl.sh@19 -- # continue 00:05:04.791 06:30:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.791 06:30:00 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:04.791 06:30:00 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:04.791 06:30:00 -- setup/acl.sh@20 -- # continue 00:05:04.791 06:30:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.049 06:30:00 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:05.049 06:30:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:05.049 06:30:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:05.049 06:30:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:05.049 06:30:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:05.049 06:30:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.049 06:30:00 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:05.049 06:30:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:05.049 06:30:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:05.049 06:30:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:05.049 06:30:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:05.049 06:30:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.049 06:30:00 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:05.049 06:30:00 -- setup/acl.sh@54 -- # run_test denied denied 00:05:05.049 06:30:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.049 06:30:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.049 06:30:00 -- common/autotest_common.sh@10 -- # set +x 00:05:05.049 ************************************ 00:05:05.049 START TEST denied 00:05:05.049 ************************************ 00:05:05.049 06:30:00 -- common/autotest_common.sh@1114 -- # denied 00:05:05.049 06:30:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:05.049 06:30:00 -- setup/acl.sh@38 -- # setup output config 00:05:05.049 06:30:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:05.049 06:30:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.049 06:30:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:05.984 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:05.984 06:30:01 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:05.984 06:30:01 -- setup/acl.sh@28 -- # local dev driver 00:05:05.984 06:30:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:05.984 06:30:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:05.984 06:30:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:05.984 06:30:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:05.984 06:30:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:05.984 06:30:01 -- setup/acl.sh@41 -- # setup reset 00:05:05.984 06:30:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.984 06:30:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.551 ************************************ 00:05:06.551 END TEST denied 00:05:06.551 ************************************ 00:05:06.551 00:05:06.551 real 0m1.448s 00:05:06.551 user 0m0.594s 00:05:06.551 sys 0m0.796s 00:05:06.551 06:30:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.551 06:30:01 -- 
common/autotest_common.sh@10 -- # set +x 00:05:06.551 06:30:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:06.551 06:30:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.551 06:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.551 06:30:01 -- common/autotest_common.sh@10 -- # set +x 00:05:06.551 ************************************ 00:05:06.551 START TEST allowed 00:05:06.551 ************************************ 00:05:06.551 06:30:01 -- common/autotest_common.sh@1114 -- # allowed 00:05:06.551 06:30:01 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:06.551 06:30:01 -- setup/acl.sh@45 -- # setup output config 00:05:06.551 06:30:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.551 06:30:01 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:06.551 06:30:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.487 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:07.487 06:30:02 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:07.487 06:30:02 -- setup/acl.sh@28 -- # local dev driver 00:05:07.487 06:30:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:07.487 06:30:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:07.487 06:30:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:07.487 06:30:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:07.487 06:30:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:07.487 06:30:02 -- setup/acl.sh@48 -- # setup reset 00:05:07.487 06:30:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.487 06:30:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.055 ************************************ 00:05:08.055 END TEST allowed 00:05:08.055 ************************************ 00:05:08.055 00:05:08.055 real 0m1.490s 00:05:08.055 user 0m0.658s 00:05:08.055 sys 0m0.843s 00:05:08.055 06:30:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.055 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.055 ************************************ 00:05:08.055 END TEST acl 00:05:08.055 ************************************ 00:05:08.055 00:05:08.055 real 0m4.309s 00:05:08.055 user 0m1.906s 00:05:08.055 sys 0m2.382s 00:05:08.055 06:30:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.055 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.055 06:30:03 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.055 06:30:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.055 06:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.055 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.055 ************************************ 00:05:08.055 START TEST hugepages 00:05:08.055 ************************************ 00:05:08.055 06:30:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.055 * Looking for test storage... 
00:05:08.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.315 06:30:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.315 06:30:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.315 06:30:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.315 06:30:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.315 06:30:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.315 06:30:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.316 06:30:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.316 06:30:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.316 06:30:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.316 06:30:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.316 06:30:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.316 06:30:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.316 06:30:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.316 06:30:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.316 06:30:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.316 06:30:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.316 06:30:03 -- scripts/common.sh@344 -- # : 1 00:05:08.316 06:30:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.316 06:30:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.316 06:30:03 -- scripts/common.sh@364 -- # decimal 1 00:05:08.316 06:30:03 -- scripts/common.sh@352 -- # local d=1 00:05:08.316 06:30:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.316 06:30:03 -- scripts/common.sh@354 -- # echo 1 00:05:08.316 06:30:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.316 06:30:03 -- scripts/common.sh@365 -- # decimal 2 00:05:08.316 06:30:03 -- scripts/common.sh@352 -- # local d=2 00:05:08.316 06:30:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.316 06:30:03 -- scripts/common.sh@354 -- # echo 2 00:05:08.316 06:30:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.316 06:30:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.316 06:30:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.316 06:30:03 -- scripts/common.sh@367 -- # return 0 00:05:08.316 06:30:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.316 06:30:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.316 --rc genhtml_branch_coverage=1 00:05:08.316 --rc genhtml_function_coverage=1 00:05:08.316 --rc genhtml_legend=1 00:05:08.316 --rc geninfo_all_blocks=1 00:05:08.316 --rc geninfo_unexecuted_blocks=1 00:05:08.316 00:05:08.316 ' 00:05:08.316 06:30:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.316 --rc genhtml_branch_coverage=1 00:05:08.316 --rc genhtml_function_coverage=1 00:05:08.316 --rc genhtml_legend=1 00:05:08.316 --rc geninfo_all_blocks=1 00:05:08.316 --rc geninfo_unexecuted_blocks=1 00:05:08.316 00:05:08.316 ' 00:05:08.316 06:30:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.316 --rc genhtml_branch_coverage=1 00:05:08.316 --rc genhtml_function_coverage=1 00:05:08.316 --rc genhtml_legend=1 00:05:08.316 --rc geninfo_all_blocks=1 00:05:08.316 --rc geninfo_unexecuted_blocks=1 00:05:08.316 00:05:08.316 ' 00:05:08.316 06:30:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.316 --rc genhtml_branch_coverage=1 00:05:08.316 --rc genhtml_function_coverage=1 00:05:08.316 --rc genhtml_legend=1 00:05:08.316 --rc geninfo_all_blocks=1 00:05:08.316 --rc geninfo_unexecuted_blocks=1 00:05:08.316 00:05:08.316 ' 00:05:08.316 06:30:03 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:08.316 06:30:03 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:08.316 06:30:03 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:08.316 06:30:03 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:08.316 06:30:03 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:08.316 06:30:03 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:08.316 06:30:03 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:08.316 06:30:03 -- setup/common.sh@18 -- # local node= 00:05:08.316 06:30:03 -- setup/common.sh@19 -- # local var val 00:05:08.316 06:30:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:08.316 06:30:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.316 06:30:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.316 06:30:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.316 06:30:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.316 06:30:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4813392 kB' 'MemAvailable: 7326312 kB' 'Buffers: 2684 kB' 'Cached: 2717644 kB' 'SwapCached: 0 kB' 'Active: 458912 kB' 'Inactive: 2378280 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 53860 kB' 'Shmem: 10512 kB' 'KReclaimable: 80508 kB' 'Slab: 180244 kB' 'SReclaimable: 80508 kB' 'SUnreclaim: 99736 kB' 'KernelStack: 6736 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- 
setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.316 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.316 06:30:03 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # continue 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:08.317 06:30:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:08.317 06:30:03 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.317 06:30:03 -- setup/common.sh@33 -- # echo 2048 00:05:08.317 06:30:03 -- setup/common.sh@33 -- # return 0 00:05:08.317 06:30:03 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:08.317 06:30:03 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:08.317 06:30:03 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:08.317 06:30:03 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:08.317 06:30:03 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:08.317 06:30:03 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:08.317 06:30:03 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:08.317 06:30:03 -- setup/hugepages.sh@207 -- # get_nodes 00:05:08.317 06:30:03 -- setup/hugepages.sh@27 -- # local node 00:05:08.317 06:30:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.317 06:30:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:08.318 06:30:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.318 06:30:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.318 06:30:03 -- setup/hugepages.sh@208 -- # clear_hp 00:05:08.318 06:30:03 -- setup/hugepages.sh@37 -- # local node hp 00:05:08.318 06:30:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.318 06:30:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.318 06:30:03 -- setup/hugepages.sh@41 -- # echo 0 00:05:08.318 06:30:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.318 06:30:03 -- setup/hugepages.sh@41 -- # echo 0 00:05:08.318 06:30:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:08.318 06:30:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:08.318 06:30:03 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:08.318 06:30:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.318 06:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.318 06:30:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.318 ************************************ 00:05:08.318 START TEST default_setup 00:05:08.318 ************************************ 00:05:08.318 06:30:03 -- common/autotest_common.sh@1114 -- # default_setup 00:05:08.318 06:30:03 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:08.318 06:30:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.318 06:30:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:08.318 06:30:03 -- setup/hugepages.sh@51 -- # shift 00:05:08.318 06:30:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:08.318 06:30:03 -- setup/hugepages.sh@52 -- # local node_ids 00:05:08.318 06:30:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.318 06:30:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.318 06:30:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:08.318 06:30:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:08.318 06:30:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.318 06:30:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.318 06:30:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.318 06:30:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.318 06:30:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.318 06:30:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:08.318 06:30:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:08.318 06:30:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:08.318 06:30:03 -- setup/hugepages.sh@73 -- # return 0 00:05:08.318 06:30:03 -- setup/hugepages.sh@137 -- # setup output 00:05:08.318 06:30:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.318 06:30:03 -- setup/common.sh@10 
00:05:08.318 06:30:03 -- setup/hugepages.sh@137 -- # setup output
00:05:08.318 06:30:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:08.318 06:30:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:08.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:09.147 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:09.147 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
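setup.sh leaves vda alone because it backs mounted filesystems, and moves the two emulated NVMe controllers from the kernel nvme driver to uio_pci_generic so a userspace driver can claim them. A rough sketch of that kind of rebind via sysfs, assuming root and a kernel with driver_override; the BDF is the first one from the log, and this is illustrative, not the script's exact logic:

    #!/usr/bin/env bash
    set -e
    bdf=0000:00:06.0
    modprobe uio_pci_generic
    # Detach the device from its current driver, if it has one.
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    # Pin the device to uio_pci_generic and let the PCI core rebind it.
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    # Clear the override so later hotplug events are unaffected.
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"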
00:05:09.147 06:30:04 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:09.147 06:30:04 -- setup/hugepages.sh@89 -- # local node
00:05:09.147 06:30:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.147 06:30:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.147 06:30:04 -- setup/hugepages.sh@92 -- # local surp
00:05:09.147 06:30:04 -- setup/hugepages.sh@93 -- # local resv
00:05:09.147 06:30:04 -- setup/hugepages.sh@94 -- # local anon
00:05:09.147 06:30:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.147 06:30:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.147 06:30:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.147 06:30:04 -- setup/common.sh@18 -- # local node=
00:05:09.147 06:30:04 -- setup/common.sh@19 -- # local var val
00:05:09.147 06:30:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.147 06:30:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.147 06:30:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.147 06:30:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.147 06:30:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.147 06:30:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.147 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.147 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.148 06:30:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6871156 kB' 'MemAvailable: 9384056 kB' 'Buffers: 2684 kB' 'Cached: 2717624 kB' 'SwapCached: 0 kB' 'Active: 460288 kB' 'Inactive: 2378284 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 53808 kB' 'Shmem: 10488 kB' 'KReclaimable: 80464 kB' 'Slab: 180180 kB' 'SReclaimable: 80464 kB' 'SUnreclaim: 99716 kB' 'KernelStack: 6760 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 key scan elided: each field compared against AnonHugePages, "continue" on every non-match ...]
00:05:09.149 06:30:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.149 06:30:04 -- setup/common.sh@33 -- # echo 0
00:05:09.149 06:30:04 -- setup/common.sh@33 -- # return 0
00:05:09.149 06:30:04 -- setup/hugepages.sh@97 -- # anon=0
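The anon=0 step hinges on the THP probe at hugepages.sh@96: only when transparent hugepages are not fully disabled ("[never]" not selected in sysfs) can AnonHugePages contribute pages the test must account for. A minimal sketch of that probe, assuming the standard sysfs path; awk stands in here for the suite's own get_meminfo:

    #!/usr/bin/env bash
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous hugepages; sample the counter.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"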
00:05:09.149 06:30:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.149 06:30:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.149 06:30:04 -- setup/common.sh@18 -- # local node=
00:05:09.149 06:30:04 -- setup/common.sh@19 -- # local var val
00:05:09.149 06:30:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.149 06:30:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.149 06:30:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.149 06:30:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.149 06:30:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.149 06:30:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.149 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.149 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.149 06:30:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6871156 kB' 'MemAvailable: 9384056 kB' 'Buffers: 2684 kB' 'Cached: 2717624 kB' 'SwapCached: 0 kB' 'Active: 460284 kB' 'Inactive: 2378284 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 53756 kB' 'Shmem: 10488 kB' 'KReclaimable: 80464 kB' 'Slab: 180180 kB' 'SReclaimable: 80464 kB' 'SUnreclaim: 99716 kB' 'KernelStack: 6712 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 key scan elided: each field compared against HugePages_Surp, "continue" on every non-match ...]
00:05:09.412 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.412 06:30:04 -- setup/common.sh@33 -- # echo 0
00:05:09.412 06:30:04 -- setup/common.sh@33 -- # return 0
00:05:09.412 06:30:04 -- setup/hugepages.sh@99 -- # surp=0
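get_meminfo takes an optional node argument: with none, mem_f stays /proc/meminfo; with a node id (as in the per-node pass further down) it switches to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N" prefix that the extglob strip at common.sh@29 removes so both files parse identically. A sketch of that selection, assuming a NUMA sysfs layout:

    #!/usr/bin/env bash
    shopt -s extglob
    node=${1-}                       # empty -> system-wide stats
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ...": drop the prefix
    # so the same IFS=': ' parse works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"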
00:05:09.412 06:30:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.412 06:30:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.412 06:30:04 -- setup/common.sh@18 -- # local node=
00:05:09.412 06:30:04 -- setup/common.sh@19 -- # local var val
00:05:09.412 06:30:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.412 06:30:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.412 06:30:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.412 06:30:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.412 06:30:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.412 06:30:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.412 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.412 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.412 06:30:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6871284 kB' 'MemAvailable: 9384192 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 459768 kB' 'Inactive: 2378292 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 53656 kB' 'Shmem: 10488 kB' 'KReclaimable: 80464 kB' 'Slab: 180272 kB' 'SReclaimable: 80464 kB' 'SUnreclaim: 99808 kB' 'KernelStack: 6744 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 key scan elided: each field compared against HugePages_Rsvd, "continue" on every non-match ...]
00:05:09.413 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.413 06:30:04 -- setup/common.sh@33 -- # echo 0
00:05:09.413 06:30:04 -- setup/common.sh@33 -- # return 0
00:05:09.413 06:30:04 -- setup/hugepages.sh@100 -- # resv=0
00:05:09.413 nr_hugepages=1024
00:05:09.413 06:30:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:09.413 resv_hugepages=0
00:05:09.413 06:30:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.413 surplus_hugepages=0
00:05:09.413 06:30:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.413 anon_hugepages=0
00:05:09.413 06:30:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.413 06:30:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:09.413 06:30:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
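The checks at hugepages.sh@107-110 assert the pool is consistent: HugePages_Total reported by the kernel must equal the requested count plus surplus and reserved pages (all zero here, so 1024 == 1024 + 0 + 0). A compact restatement of that invariant, reading the counter the same way; awk again stands in for get_meminfo:

    #!/usr/bin/env bash
    nr_hugepages=1024 surp=0 resv=0     # values echoed by the test above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "hugepage mismatch: total=$total, expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi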
00:05:09.413 06:30:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.413 06:30:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:09.413 06:30:04 -- setup/common.sh@18 -- # local node=
00:05:09.413 06:30:04 -- setup/common.sh@19 -- # local var val
00:05:09.413 06:30:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.413 06:30:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.413 06:30:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.413 06:30:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.413 06:30:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.413 06:30:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.413 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.413 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.413 06:30:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6871284 kB' 'MemAvailable: 9384192 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 459968 kB' 'Inactive: 2378292 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119548 kB' 'Mapped: 53656 kB' 'Shmem: 10488 kB' 'KReclaimable: 80464 kB' 'Slab: 180268 kB' 'SReclaimable: 80464 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6728 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 key scan elided: each field compared against HugePages_Total, "continue" on every non-match ...]
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.415 06:30:04 -- setup/common.sh@33 -- # echo 1024
00:05:09.415 06:30:04 -- setup/common.sh@33 -- # return 0
00:05:09.415 06:30:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:09.415 06:30:04 -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.415 06:30:04 -- setup/hugepages.sh@27 -- # local node
00:05:09.415 06:30:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.415 06:30:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:09.415 06:30:04 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:09.415 06:30:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:09.415 06:30:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.415 06:30:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.415 06:30:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.415 06:30:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.415 06:30:04 -- setup/common.sh@18 -- # local node=0
00:05:09.415 06:30:04 -- setup/common.sh@19 -- # local var val
00:05:09.415 06:30:04 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.415 06:30:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.415 06:30:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.415 06:30:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.415 06:30:04 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.415 06:30:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.415 06:30:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6871284 kB' 'MemUsed: 5367828 kB' 'SwapCached: 0 kB' 'Active: 459760 kB' 'Inactive: 2378292 kB' 'Active(anon): 128224 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720316 kB' 'Mapped: 53656 kB' 'AnonPages: 119288 kB' 'Shmem: 10488 kB' 'KernelStack: 6712 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80464 kB' 'Slab: 180268 kB' 'SReclaimable: 80464 kB' 'SUnreclaim: 99804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': '
00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415
06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.415 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.415 06:30:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # continue 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.416 06:30:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.416 06:30:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.416 06:30:04 -- setup/common.sh@33 -- # echo 0 00:05:09.416 06:30:04 -- setup/common.sh@33 -- # return 0 00:05:09.416 06:30:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.416 06:30:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.416 06:30:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.416 06:30:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.416 node0=1024 expecting 1024 00:05:09.416 06:30:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.416 06:30:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.416 00:05:09.416 real 0m0.987s 00:05:09.416 user 0m0.474s 00:05:09.416 sys 0m0.438s 00:05:09.416 06:30:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.416 06:30:04 -- common/autotest_common.sh@10 -- # set +x 00:05:09.416 ************************************ 00:05:09.416 END TEST default_setup 00:05:09.416 ************************************ 00:05:09.416 06:30:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:09.416 06:30:04 
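
A note for readers following the trace: the condensed read/continue runs above all come from setup/common.sh's get_meminfo, which walks a meminfo file line by line until the requested key matches, then echoes its value (1024 for HugePages_Total, 0 for node0's HugePages_Surp). Below is a minimal standalone sketch of that pattern, reconstructed from the xtrace; the helper name is ours and this is an approximation, not the SPDK script itself.

```bash
#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (inferred from the xtrace of
# setup/common.sh's get_meminfo; not the actual SPDK helper).
shopt -s extglob   # for the +([0-9]) pattern used when stripping "Node N "

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; prefer them when a node id is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix
    # so both file flavors parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")
    # Key-by-key scan -- this loop is what emits the long
    # "[[ key == pattern ]] / continue" runs in the log above.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Total      # global pool; 1024 in the trace above
get_meminfo_sketch HugePages_Surp 0     # surplus pages on NUMA node 0; 0 above
```
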
00:05:09.416 06:30:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.416 06:30:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.416 06:30:04 -- common/autotest_common.sh@10 -- # set +x
00:05:09.416 ************************************
00:05:09.416 START TEST per_node_1G_alloc
00:05:09.416 ************************************
00:05:09.416 06:30:04 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:09.416 06:30:04 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:09.416 06:30:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:09.416 06:30:04 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:09.416 06:30:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:09.416 06:30:04 -- setup/hugepages.sh@51 -- # shift
00:05:09.416 06:30:04 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:09.416 06:30:04 -- setup/hugepages.sh@52 -- # local node_ids
00:05:09.416 06:30:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.416 06:30:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:09.416 06:30:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:09.416 06:30:04 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:09.416 06:30:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.416 06:30:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:09.416 06:30:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.416 06:30:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.416 06:30:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.416 06:30:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:09.416 06:30:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:09.416 06:30:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:09.416 06:30:04 -- setup/hugepages.sh@73 -- # return 0
00:05:09.416 06:30:04 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:09.416 06:30:04 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:09.416 06:30:04 -- setup/hugepages.sh@146 -- # setup output
00:05:09.416 06:30:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.416 06:30:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:09.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:09.676 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:09.676 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:09.676 06:30:05 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:09.676 06:30:05 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:09.676 06:30:05 -- setup/hugepages.sh@89 -- # local node
00:05:09.676 06:30:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.676 06:30:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.676 06:30:05 -- setup/hugepages.sh@92 -- # local surp
00:05:09.676 06:30:05 -- setup/hugepages.sh@93 -- # local resv
00:05:09.676 06:30:05 -- setup/hugepages.sh@94 -- # local anon
00:05:09.676 06:30:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.676 06:30:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.676 06:30:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.676 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:09.676 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:09.676 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.676 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
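
The get_test_nr_hugepages trace above boils down to one division: the requested pool size in kB over the machine's default hugepage size. A sketch of that arithmetic, using the numbers from this run (variable names are illustrative, not SPDK's):

```bash
# Sizing step behind the trace above: a 1 GiB (1048576 kB) request on a
# box whose default hugepage is 2048 kB becomes 512 pages, all assigned
# to the single requested NUMA node. Illustrative names, log-sourced numbers.
size_kb=1048576
hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this runner
nr_hugepages=$(( size_kb / hugepage_kb ))                             # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"   # what the test exports before running setup.sh
```
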
00:05:09.676 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.676 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.676 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.676 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.676 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:09.676 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:09.676 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930684 kB' 'MemAvailable: 10443488 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460300 kB' 'Inactive: 2378292 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119876 kB' 'Mapped: 53768 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'KernelStack: 6696 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:09.676 06:30:05 [xtrace loop condensed: setup/common.sh@31-32 read/continue skipped keys MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:05:09.938 06:30:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.938 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:09.938 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:09.938 06:30:05 -- setup/hugepages.sh@97 -- # anon=0
00:05:09.938 06:30:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.938 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.938 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:09.938 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:09.938 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.938 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.938 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.938 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.938 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.938 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.938 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:09.938 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:09.938 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930684 kB' 'MemAvailable: 10443488 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460000 kB' 'Inactive: 2378292 kB' 'Active(anon): 128464 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 54028 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'KernelStack: 6712 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
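
verify_nr_hugepages, whose trace resumes above, collects AnonHugePages (anon), HugePages_Surp (surp) and, next, HugePages_Rsvd (resv), then checks them against the configured pool; the relation is visible earlier in this log as (( 1024 == nr_hugepages + surp + resv )). A sketch of that accounting check, reusing the hypothetical helper from the first sketch:

```bash
# Accounting check inferred from this trace, reusing get_meminfo_sketch
# from the earlier snippet. The relation comes from the
# "(( 1024 == nr_hugepages + surp + resv ))" line in the log; this is a
# paraphrase of verify_nr_hugepages, not a quote of it.
nr_hugepages=512
anon=$(get_meminfo_sketch AnonHugePages)    # 0 here: no transparent hugepages in use
surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages beyond the static pool
resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved but not yet faulted in
total=$(get_meminfo_sketch HugePages_Total)
if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
fi
```
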
00:05:09.939 06:30:05 [xtrace loop condensed: setup/common.sh@31-32 read/continue skipped keys MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:05:09.940 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.940 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:09.940 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:09.940 06:30:05 -- setup/hugepages.sh@99 -- # surp=0
00:05:09.940 06:30:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.940 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.940 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:09.940 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:09.940 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.940 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.940 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.940 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.940 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.940 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.940 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:09.940 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:09.940 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930936 kB' 'MemAvailable: 10443740 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 459736 kB' 'Inactive: 2378292 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 53640 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'KernelStack: 6704 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
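
The per-node pool behind HUGENODE=0 NRHUGE=512, and the node0 meminfo file read earlier in this test, are standard kernel sysfs interfaces; a sketch of driving them directly, independent of SPDK's setup.sh (which does this for you):

```bash
# Generic Linux sysfs knobs for per-node hugepage pools -- kernel ABI,
# shown for orientation; SPDK's setup.sh manages these when HUGENODE and
# NRHUGE are exported, as in the trace above.
node=0
pool=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
echo 512 | sudo tee "$pool/nr_hugepages" >/dev/null     # size node0's 2 MiB pool
cat "$pool/nr_hugepages" "$pool/free_hugepages"         # confirm the allocation took
grep HugePages_ /sys/devices/system/node/node$node/meminfo   # node-local view, as get_meminfo reads it
```
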
00:05:09.940 06:30:05 [xtrace loop condensed: setup/common.sh@31-32 read/continue skipped keys MemTotal through HugePages_Free while scanning for HugePages_Rsvd; the log is cut off mid-scan below]
00:05:09.941 06:30:05 -- setup/common.sh@32 -- # [[
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.941 06:30:05 -- setup/common.sh@33 -- # echo 0 00:05:09.941 06:30:05 -- setup/common.sh@33 -- # return 0 00:05:09.941 06:30:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:09.941 nr_hugepages=512 00:05:09.941 06:30:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:09.941 resv_hugepages=0 00:05:09.941 06:30:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.941 surplus_hugepages=0 00:05:09.941 06:30:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.941 anon_hugepages=0 00:05:09.941 06:30:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.941 06:30:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:09.941 06:30:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:09.941 06:30:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.941 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.941 06:30:05 -- setup/common.sh@18 -- # local node= 00:05:09.941 06:30:05 -- setup/common.sh@19 -- # local var val 00:05:09.941 06:30:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.941 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.941 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.941 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.941 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.941 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.941 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930936 kB' 'MemAvailable: 10443740 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 459704 kB' 'Inactive: 2378292 kB' 'Active(anon): 128168 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119380 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'KernelStack: 6720 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:09.941 06:30:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.941 06:30:05 -- setup/common.sh@32 -- # continue 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.941 06:30:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.941 06:30:05 -- setup/common.sh@32 -- # continue 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.941 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.941 
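For readers following the trace: the get_meminfo helper being single-stepped above just walks a meminfo file and splits each 'Key: value' line on IFS. A minimal standalone Bash sketch of the same technique (illustrative only, not the actual setup/common.sh code):

#!/usr/bin/env bash
# Print the value of one /proc/meminfo field; with a node argument, read the
# per-node copy under sysfs instead (its lines carry a "Node <n> " prefix).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Strip the "Node <n> " prefix that per-node meminfo files add.
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"   # var=key, val=first number
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Rsvd     # prints 0 on the VM in this run
get_meminfo HugePages_Surp 0   # per-node lookup, as used later in the trace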
00:05:09.941 06:30:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.941 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:09.941 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:09.941 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:09.941 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.941 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.941 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.941 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.941 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.941 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.941 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:09.941 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:09.941 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930936 kB' 'MemAvailable: 10443740 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 459704 kB' 'Inactive: 2378292 kB' 'Active(anon): 128168 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119380 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'KernelStack: 6720 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-key compare/continue scan of the dump above until HugePages_Total matches]
00:05:09.943 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.943 06:30:05 -- setup/common.sh@33 -- # echo 512
00:05:09.943 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:09.943 06:30:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:09.943 06:30:05 -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.943 06:30:05 -- setup/hugepages.sh@27 -- # local node
00:05:09.943 06:30:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.943 06:30:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:09.943 06:30:05 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:09.943 06:30:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:09.943 06:30:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.943 06:30:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
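The get_nodes step just traced boils down to a glob over sysfs, one entry per NUMA node. A rough standalone equivalent follows; the variable names mirror the log, but the per-node hugepage sysfs path is an assumption of this sketch (the trace itself goes through the per-node meminfo route instead):

#!/usr/bin/env bash
shopt -s extglob nullglob
declare -A nodes_sys=()

# Record each NUMA node's current 2 MiB hugepage count from sysfs.
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    (( ${#nodes_sys[@]} > 0 ))   # fail if no node directories were found
}

get_nodes && for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"   # e.g. node0=512 on this single-node VM
done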
00:05:09.943 06:30:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.943 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.943 06:30:05 -- setup/common.sh@18 -- # local node=0
00:05:09.943 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:09.943 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.943 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.943 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.943 06:30:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.943 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.943 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.943 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7930936 kB' 'MemUsed: 4308176 kB' 'SwapCached: 0 kB' 'Active: 459880 kB' 'Inactive: 2378292 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720316 kB' 'Mapped: 53636 kB' 'AnonPages: 119504 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80256 kB' 'Slab: 180072 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:09.943 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:09.943 06:30:05 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-key compare/continue scan of the node0 meminfo until HugePages_Surp matches]
00:05:09.944 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.944 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:09.944 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:09.944 06:30:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.944 06:30:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.944 06:30:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.944 node0=512 expecting 512
00:05:09.944 06:30:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:09.944 06:30:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:09.944 
00:05:09.944 real 0m0.537s
00:05:09.944 user 0m0.263s
00:05:09.944 sys 0m0.307s
00:05:09.944 06:30:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:09.944 06:30:05 -- common/autotest_common.sh@10 -- # set +x
00:05:09.944 ************************************
00:05:09.944 END TEST per_node_1G_alloc
00:05:09.944 ************************************
00:05:09.944 06:30:05 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:09.944 06:30:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.944 06:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.944 06:30:05 -- common/autotest_common.sh@10 -- # set +x
00:05:09.944 ************************************
00:05:09.944 START TEST even_2G_alloc
00:05:09.944 ************************************
00:05:09.944 06:30:05 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:05:09.944 06:30:05 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:09.944 06:30:05 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.944 06:30:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
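The sizing step just traced is plain arithmetic: the requested amount divided by the hugepage size reported in /proc/meminfo. A sketch of that calculation, not the real setup/hugepages.sh; it treats the argument as kB, which matches the numbers in this run (2097152 kB = 2 GiB, and 2097152 / 2048 = 1024 pages):

# Convert a requested allocation (kB) into a hugepage count.
get_test_nr_hugepages() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # e.g. 2048
    (( size_kb >= hp_kb )) || return 1   # must cover at least one page
    nr_hugepages=$(( size_kb / hp_kb ))
    echo "nr_hugepages=$nr_hugepages"
}

get_test_nr_hugepages 2097152   # -> nr_hugepages=1024 with 2048 kB pages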
00:05:09.944 06:30:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.944 06:30:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.944 06:30:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.944 06:30:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.944 06:30:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.944 06:30:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.944 06:30:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.944 06:30:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:09.944 06:30:05 -- setup/hugepages.sh@83 -- # : 0
00:05:09.944 06:30:05 -- setup/hugepages.sh@84 -- # : 0
00:05:09.944 06:30:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.944 06:30:05 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:09.944 06:30:05 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:09.944 06:30:05 -- setup/hugepages.sh@153 -- # setup output
00:05:09.944 06:30:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.944 06:30:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:10.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:10.490 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:10.490 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:10.490 06:30:05 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:10.490 06:30:05 -- setup/hugepages.sh@89 -- # local node
00:05:10.490 06:30:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:10.490 06:30:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:10.490 06:30:05 -- setup/hugepages.sh@92 -- # local surp
00:05:10.490 06:30:05 -- setup/hugepages.sh@93 -- # local resv
00:05:10.490 06:30:05 -- setup/hugepages.sh@94 -- # local anon
00:05:10.490 06:30:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.490 06:30:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.490 06:30:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:10.490 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:10.490 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:10.490 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.490 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.490 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.490 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.490 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.490 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.490 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:10.490 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:10.490 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6881668 kB' 'MemAvailable: 9394472 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460472 kB' 'Inactive: 2378292 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 53656 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180044 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99788 kB' 'KernelStack: 6680 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
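The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" check a few lines up is verify_nr_hugepages reading the transparent-hugepage mode (the bracketed word in that sysfs file is the active setting) and only counting AnonHugePages when THP is not disabled. A sketch under that reading, not the exact script logic:

# Count anonymous THP usage only when THP is enabled on the host.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *'[never]'* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon_hugepages=${anon:-0}"   # 0 in this run, matching the trace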
[xtrace elided: per-key compare/continue scan of the dump above until AnonHugePages matches]
00:05:10.492 06:30:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.492 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:10.492 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:10.492 06:30:05 -- setup/hugepages.sh@97 -- # anon=0
00:05:10.492 06:30:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.492 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.492 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:10.492 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:10.492 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.492 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.492 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.492 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.492 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.492 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.492 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:10.492 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6881416 kB' 'MemAvailable: 9394220 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460240 kB' 'Inactive: 2378292 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119788 kB' 'Mapped: 53656 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180044 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99788 kB' 'KernelStack: 6648 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:10.492 06:30:05 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-key compare/continue scan toward HugePages_Surp; the excerpt ends mid-scan]
read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # 
continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # continue 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.493 06:30:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.493 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.493 06:30:05 -- setup/common.sh@33 -- # echo 0 00:05:10.493 06:30:05 -- setup/common.sh@33 -- # return 0 00:05:10.493 06:30:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:10.493 06:30:05 -- 
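This is the second probe in a row with exactly the same shape (anon above, surp here), so the helper is worth spelling out once. The following is a reconstruction of get_meminfo inferred only from the visible trace (the local get/node pair, the mem_f switch, the IFS=': ' read loop at setup/common.sh@17-33); it is a sketch, not the verbatim SPDK setup/common.sh:

```bash
#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the xtrace above; it reads
# "Key: value" pairs and echoes the value of the first matching key.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f=/proc/meminfo
	# A node argument switches to that node's own counters; with $node
	# empty the path below does not exist and /proc/meminfo is kept,
	# which is exactly the "[[ -e .../node/meminfo ]]" test in the log.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	while IFS=': ' read -r var val _; do
		# Every skipped key is one "continue" iteration in the trace.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "$mem_f"
	return 1
}

get_meminfo HugePages_Surp   # prints 0 on this runner
```

The third field captured by read (the trailing "kB" unit) is deliberately discarded into `_`, which is why the echoed values are bare numbers.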
00:05:10.493 06:30:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:10.493 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:10.493 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:10.493 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:10.493 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.493 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.493 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.493 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.493 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.494 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.494 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:10.494 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6881416 kB' 'MemAvailable: 9394220 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460036 kB' 'Inactive: 2378292 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 53764 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180048 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99792 kB' 'KernelStack: 6712 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:10.494 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:10.494 06:30:05 -- setup/common.sh@31-32 -- # [xtrace collapsed: every key from MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped via continue]
00:05:10.495 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.495 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:10.495 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:10.495 06:30:05 -- setup/hugepages.sh@100 -- # resv=0
00:05:10.495 nr_hugepages=1024
00:05:10.495 06:30:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:10.495 resv_hugepages=0
00:05:10.495 06:30:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:10.495 surplus_hugepages=0
00:05:10.495 06:30:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:10.495 anon_hugepages=0
00:05:10.495 06:30:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:10.495 06:30:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.495 06:30:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
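The two arithmetic checks above encode the pool invariant this whole test hinges on: the HugePages_Total the kernel reports must equal the requested nr_hugepages plus surplus plus reserved pages. A standalone version with this run's values, using awk in place of the get_meminfo helper so it is self-contained:

```bash
#!/usr/bin/env bash
# The pool invariant behind "(( 1024 == nr_hugepages + surp + resv ))":
# every page the kernel reports must be accounted for by the request,
# the surplus pool, or outstanding reservations.
nr_hugepages=1024   # what this run asked for
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
	echo "pool consistent: $total pages"   # 1024 == 1024 + 0 + 0 here
else
	echo "pool mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi
```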
00:05:10.495 06:30:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.495 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:10.495 06:30:05 -- setup/common.sh@18 -- # local node=
00:05:10.495 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:10.495 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.495 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.495 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.495 06:30:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.495 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.495 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.495 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:10.496 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6881416 kB' 'MemAvailable: 9394220 kB' 'Buffers: 2684 kB' 'Cached: 2717632 kB' 'SwapCached: 0 kB' 'Active: 460160 kB' 'Inactive: 2378292 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119852 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180048 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99792 kB' 'KernelStack: 6736 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:10.496 06:30:05 -- setup/common.sh@31-32 -- # [xtrace collapsed: every key from MemTotal through Unaccepted compared against HugePages_Total and skipped via continue]
00:05:10.497 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.497 06:30:05 -- setup/common.sh@33 -- # echo 1024
00:05:10.497 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:10.497 06:30:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.497 06:30:05 -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.497 06:30:05 -- setup/hugepages.sh@27 -- # local node
00:05:10.497 06:30:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.497 06:30:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:10.497 06:30:05 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:10.497 06:30:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:10.497 06:30:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.497 06:30:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
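From here the same probe is repeated once per NUMA node: get_meminfo is called with a node argument, mem_f flips to /sys/devices/system/node/node0/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the "Node 0 " prefix those per-node files put on every line. A self-contained sketch of that per-node pass, with sed doing the prefix strip that the extglob expansion does in the harness:

```bash
#!/usr/bin/env bash
# Per-node variant: enumerate NUMA nodes the way get_nodes does above,
# then read one counter from each node's own meminfo file.
shopt -s extglob   # needed for the +([0-9]) glob the harness also uses
for node_dir in /sys/devices/system/node/node+([0-9]); do
	node=${node_dir##*node}
	# Per-node meminfo lines look like "Node 0 HugePages_Free:  1024";
	# strip the "Node <N> " prefix before matching the key.
	free=$(sed 's/^Node [0-9]* //' "$node_dir/meminfo" |
		awk '/^HugePages_Free:/ {print $2}')
	echo "node$node HugePages_Free: $free"   # node0 reported 1024 in this run
done
```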
00:05:10.497 06:30:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.497 06:30:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.497 06:30:05 -- setup/common.sh@18 -- # local node=0
00:05:10.497 06:30:05 -- setup/common.sh@19 -- # local var val
00:05:10.497 06:30:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:10.497 06:30:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.497 06:30:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.497 06:30:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.497 06:30:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.497 06:30:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.497 06:30:05 -- setup/common.sh@31 -- # IFS=': '
00:05:10.498 06:30:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6881720 kB' 'MemUsed: 5357392 kB' 'SwapCached: 0 kB' 'Active: 459960 kB' 'Inactive: 2378292 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378292 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720316 kB' 'Mapped: 53636 kB' 'AnonPages: 119560 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80256 kB' 'Slab: 180032 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:10.498 06:30:05 -- setup/common.sh@31 -- # read -r var val _
00:05:10.498 06:30:05 -- setup/common.sh@31-32 -- # [xtrace collapsed: every node0 key from MemTotal through HugePages_Free compared against HugePages_Surp and skipped via continue]
00:05:10.499 06:30:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.499 06:30:05 -- setup/common.sh@33 -- # echo 0
00:05:10.499 06:30:05 -- setup/common.sh@33 -- # return 0
00:05:10.499 06:30:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:10.499 06:30:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:10.499 06:30:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
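The sorted_t/sorted_s assignments at hugepages.sh@127 rely on a small index-by-value trick: writing sorted_t[count]=1 for every node leaves one populated array index per distinct count, so a perfectly even spread means the array ends up with exactly one index. A standalone version (the two-node counts are hypothetical; this runner has a single node):

```bash
#!/usr/bin/env bash
# Index-by-value deduplication, as used by verify_nr_hugepages: each node's
# page count becomes an array *index*, so duplicates collapse automatically.
nodes_test=(1024 1024)   # hypothetical expected pages per node, 2-node box
declare -a sorted_t=()
for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1   # arithmetic subscript, same as the log
done
if (( ${#sorted_t[@]} == 1 )); then
	echo "even allocation: every node holds ${!sorted_t[*]} pages"
else
	echo "uneven allocation: counts ${!sorted_t[*]}" >&2
fi
```

The harness then prints the expected/observed pair for each node, which is exactly the "node0=1024 expecting 1024" line that follows.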
00:05:10.499 node0=1024 expecting 1024
00:05:10.499 06:30:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:10.499 06:30:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:10.499 real	0m0.538s
00:05:10.499 user	0m0.289s
00:05:10.499 sys	0m0.282s
00:05:10.499 06:30:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:10.499 06:30:05 -- common/autotest_common.sh@10 -- # set +x
00:05:10.499 ************************************
00:05:10.499 END TEST even_2G_alloc
00:05:10.499 ************************************
00:05:10.499 06:30:05 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:10.499 06:30:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:10.499 06:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:10.499 06:30:05 -- common/autotest_common.sh@10 -- # set +x
00:05:10.499 ************************************
00:05:10.499 START TEST odd_alloc
00:05:10.499 ************************************
00:05:10.499 06:30:05 -- common/autotest_common.sh@1114 -- # odd_alloc
00:05:10.499 06:30:05 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:10.499 06:30:05 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:10.499 06:30:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:10.499 06:30:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:10.499 06:30:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:10.499 06:30:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:10.499 06:30:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:10.499 06:30:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:10.499 06:30:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:10.499 06:30:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:10.499 06:30:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:10.499 06:30:05 -- setup/hugepages.sh@83 -- # : 0
00:05:10.499 06:30:05 -- setup/hugepages.sh@84 -- # : 0
00:05:10.499 06:30:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:10.499 06:30:05 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:10.499 06:30:05 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:10.499 06:30:05 -- setup/hugepages.sh@160 -- # setup output
00:05:10.499 06:30:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.499 06:30:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:11.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:11.071 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:11.071 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
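get_test_nr_hugepages turns the 2098176 kB request above into nr_hugepages=1025. The exact SPDK expression is not visible in this trace, but the numbers are consistent with rounding a kB request up to whole default-size pages, HUGEMEM=2049 MB being deliberately half a page past 1024 pages:

```bash
#!/usr/bin/env bash
# Why the request lands on an odd page count. The round-up formula here is
# an assumption; only the inputs and the result appear in the log.
HUGEMEM=2049                         # MB, as exported at hugepages.sh@160
size=$((HUGEMEM * 1024))             # 2098176 kB requested
default_hugepages=2048               # kB per page ("Hugepagesize: 2048 kB")
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"    # 1025, the odd count odd_alloc wants
```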
00:05:11.071 06:30:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.071 06:30:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.071 06:30:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.071 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:11.072 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:11.072 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.072 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.072 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.072 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.072 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.072 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.072 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.072 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.072 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6895820 kB' 'MemAvailable: 9408628 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460148 kB' 'Inactive: 2378296 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 53764 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180032 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99776 kB' 'KernelStack: 6680 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key from MemTotal through HardwareCorrupted and skip it with 'continue' while scanning for AnonHugePages]
00:05:11.073 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.073 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:11.073 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:11.073 06:30:06 -- setup/hugepages.sh@97 -- # anon=0
00:05:11.073 06:30:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.073 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.073 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:11.073 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:11.073 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.073 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.073 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
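All of the get_meminfo lookups in this trace (AnonHugePages above, HugePages_Surp, HugePages_Rsvd and HugePages_Total below) follow one pattern: snapshot the meminfo file, then read it line by line with IFS=': ' and skip every key that does not match, which is what produces the long 'continue' runs. A condensed, runnable rendition of that technique (a sketch, not the verbatim setup/common.sh, which snapshots via mapfile and also handles per-node files):

  #!/usr/bin/env bash
  # Print the value of one /proc/meminfo key, mirroring the traced scan.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the 'continue' runs seen in the log
          echo "$val"                        # common.sh@33: echo value, return 0
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo AnonHugePages   # prints 0 in the run captured here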
00:05:11.073 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.073 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.073 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.073 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.073 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.073 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6895820 kB' 'MemAvailable: 9408628 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460020 kB' 'Inactive: 2378296 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119632 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180036 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6704 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key from MemTotal through HugePages_Rsvd and skip it with 'continue' while scanning for HugePages_Surp]
00:05:11.074 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.074 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:11.074 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:11.074 06:30:06 -- setup/hugepages.sh@99 -- # surp=0
00:05:11.074 06:30:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.074 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.074 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:11.074 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:11.074 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.074 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.074 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.074 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.074 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.074 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.074 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.074 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.074 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6895820 kB' 'MemAvailable: 9408628 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 459768 kB' 'Inactive: 2378296 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180032 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99776 kB' 'KernelStack: 6704 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@31-32 read each /proc/meminfo key from MemTotal through Unaccepted and skip it with 'continue' while scanning for HugePages_Rsvd]
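For context on the two lookups traced here: HugePages_Surp counts surplus pages allocated beyond the configured pool, and HugePages_Rsvd counts pages reserved for mappings but not yet faulted in; both read 0 in this run, so the pool is exactly the configured 1025 pages. To spot-check the same counters outside the suite, a plain awk query works (these one-liners are this note's assumption, not something the test runs):

  awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # 0 in this run
  awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # 0 in this run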
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.076 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:11.076 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:11.076 06:30:06 -- setup/hugepages.sh@100 -- # resv=0
00:05:11.076 06:30:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:11.076 nr_hugepages=1025
00:05:11.076 resv_hugepages=0
00:05:11.076 06:30:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.076 surplus_hugepages=0
00:05:11.076 06:30:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.076 anon_hugepages=0
00:05:11.076 06:30:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.076 06:30:06 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:11.076 06:30:06 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:11.076 06:30:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.076 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.076 06:30:06 -- setup/common.sh@18 -- # local node=
00:05:11.076 06:30:06 -- setup/common.sh@19 -- # local var val
00:05:11.076 06:30:06 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.076 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.076 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.076 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.076 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.076 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.076 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6895820 kB' 'MemAvailable: 9408628 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 459728 kB' 'Inactive: 2378296 kB' 'Active(anon): 128192 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 53636 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180032 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99776 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
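The hugepages.sh@107 check above is the core assertion of verify_nr_hugepages: the kernel-reported total must equal nr_hugepages + surp + resv (1025 == 1025 + 0 + 0 here), and @110 re-checks it against a freshly read HugePages_Total. Reduced to stand-alone shell, the invariant looks like this (a sketch using the values from this run):

  #!/usr/bin/env bash
  nr_hugepages=1025 surp=0 resv=0   # values collected by the traced lookups
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "accounting OK: ${total} pages"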
00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 
00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.076 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.076 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.077 06:30:06 -- setup/common.sh@33 -- # echo 1025 00:05:11.077 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:11.077 06:30:06 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:11.077 06:30:06 -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.077 06:30:06 -- setup/hugepages.sh@27 -- # local node 00:05:11.077 06:30:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.077 06:30:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
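The wall of `continue` lines above is bash xtrace from the get_meminfo helper in setup/common.sh: it walks the meminfo key/value pairs with IFS=': ' until the requested field (HugePages_Total here) matches, echoes the value (1025), and returns, after which hugepages.sh checks `(( 1025 == nr_hugepages + surp + resv ))`. A minimal standalone sketch of the same scan, with an illustrative function name (the real helper iterates a pre-captured array rather than reading the file directly):

```bash
#!/usr/bin/env bash
# Sketch of the scan driving the xtrace above (illustrative name;
# the real get_meminfo in setup/common.sh iterates a pre-read array).
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # xtrace escapes the right-hand glob character by character,
        # which is why the log shows patterns like
        # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1025 for HugePages_Total
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_field HugePages_Total   # the run above then verifies
                            # (( 1025 == nr_hugepages + surp + resv ))
```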
00:05:11.077 06:30:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.077 06:30:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.077 06:30:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.077 06:30:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.077 06:30:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.077 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.077 06:30:06 -- setup/common.sh@18 -- # local node=0 00:05:11.077 06:30:06 -- setup/common.sh@19 -- # local var val 00:05:11.077 06:30:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.077 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.077 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.077 06:30:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.077 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.077 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6895820 kB' 'MemUsed: 5343292 kB' 'SwapCached: 0 kB' 'Active: 459956 kB' 'Inactive: 2378296 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720320 kB' 'Mapped: 53636 kB' 'AnonPages: 119528 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80256 kB' 'Slab: 180032 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.077 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.077 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 
06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 
06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.078 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.078 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.078 06:30:06 -- setup/common.sh@33 -- # echo 0 00:05:11.078 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:11.078 06:30:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.078 06:30:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.078 06:30:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.078 06:30:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.078 node0=1025 expecting 1025 00:05:11.078 ************************************ 00:05:11.078 END TEST odd_alloc 00:05:11.078 ************************************ 00:05:11.078 06:30:06 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:11.078 06:30:06 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:11.078 00:05:11.078 real 0m0.538s 00:05:11.078 user 0m0.249s 00:05:11.078 sys 0m0.313s 00:05:11.078 06:30:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.078 06:30:06 -- common/autotest_common.sh@10 -- # set +x 00:05:11.078 06:30:06 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:11.078 06:30:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.078 06:30:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.078 06:30:06 -- common/autotest_common.sh@10 -- # set +x 00:05:11.078 ************************************ 00:05:11.078 START TEST custom_alloc 00:05:11.078 ************************************ 00:05:11.078 06:30:06 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:11.078 06:30:06 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:11.078 06:30:06 -- setup/hugepages.sh@169 -- # local node 00:05:11.078 06:30:06 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:11.078 06:30:06 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:11.078 06:30:06 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:11.079 06:30:06 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:11.079 06:30:06 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:11.079 06:30:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:11.079 06:30:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.079 06:30:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.079 06:30:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.079 06:30:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.079 06:30:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.079 06:30:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@83 -- # : 0 00:05:11.079 06:30:06 -- setup/hugepages.sh@84 -- # : 0 00:05:11.079 06:30:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:11.079 06:30:06 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:11.079 06:30:06 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:11.079 06:30:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.079 06:30:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.079 06:30:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.079 06:30:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.079 06:30:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.079 06:30:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:11.079 06:30:06 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:11.079 06:30:06 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:11.079 06:30:06 -- setup/hugepages.sh@78 -- # return 0 00:05:11.079 06:30:06 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:11.079 06:30:06 -- setup/hugepages.sh@187 -- # setup output 00:05:11.079 06:30:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.079 06:30:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.651 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.651 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.651 06:30:06 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:11.651 06:30:06 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:11.651 06:30:06 -- setup/hugepages.sh@89 -- # local node 00:05:11.651 06:30:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.651 06:30:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.651 06:30:06 -- setup/hugepages.sh@92 -- # local surp 
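custom_alloc's sizing, traced just above before verify_nr_hugepages starts, is plain division: get_test_nr_hugepages is asked for 1048576 kB, the default hugepage is 2048 kB, so nr_hugepages becomes 512, all placed on node 0 and exported as HUGENODE='nodes_hp[0]=512'. A sketch of that arithmetic, with illustrative variable names and values taken from the trace:

```bash
# Illustrative variable names; values as seen in the trace.
size_kb=1048576                 # argument to get_test_nr_hugepages
hugepage_kb=2048                # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))     # 1048576 / 2048 = 512
HUGENODE="nodes_hp[0]=${nr_hugepages}"        # all 512 pages on node 0
echo "$HUGENODE"                # prints nodes_hp[0]=512
```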
00:05:11.651 06:30:06 -- setup/hugepages.sh@93 -- # local resv 00:05:11.651 06:30:06 -- setup/hugepages.sh@94 -- # local anon 00:05:11.651 06:30:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.651 06:30:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.651 06:30:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.651 06:30:06 -- setup/common.sh@18 -- # local node= 00:05:11.651 06:30:06 -- setup/common.sh@19 -- # local var val 00:05:11.651 06:30:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.651 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.651 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.651 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.651 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.651 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7944268 kB' 'MemAvailable: 10457076 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460124 kB' 'Inactive: 2378296 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 53888 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180036 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6664 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.651 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.651 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.652 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.652 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.652 06:30:06 -- setup/common.sh@33 -- # echo 0 00:05:11.652 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:11.652 06:30:06 -- setup/hugepages.sh@97 -- # anon=0 00:05:11.652 06:30:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.652 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.652 06:30:06 -- setup/common.sh@18 -- # local node= 00:05:11.652 06:30:06 -- setup/common.sh@19 -- # local var val 00:05:11.653 06:30:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.653 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
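verify_nr_hugepages first decides whether transparent hugepages could interfere: the test `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` checks that the bracketed policy in the THP knob is not [never], and since it is not, AnonHugePages is read (0 kB here), giving anon=0. A sketch of that probe, assuming the string comes from the standard THP sysfs path:

```bash
# Assumes the standard THP sysfs knob; helper-free sketch.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not fully disabled, so anonymous hugepages could skew the
    # hugepage accounting; read the current amount (0 kB in this run).
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon}"
```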
00:05:11.653 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.653 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.653 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.653 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952796 kB' 'MemAvailable: 10465604 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460044 kB' 'Inactive: 2378296 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180040 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99784 kB' 'KernelStack: 6704 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- 
setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.653 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.653 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.654 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.654 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.654 06:30:06 -- setup/common.sh@33 -- # echo 0 00:05:11.654 06:30:06 -- setup/common.sh@33 -- # return 0 00:05:11.654 06:30:06 -- setup/hugepages.sh@99 -- # surp=0 00:05:11.655 06:30:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.655 06:30:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.655 06:30:06 -- setup/common.sh@18 -- # local node= 00:05:11.655 06:30:06 -- setup/common.sh@19 -- # local var val 00:05:11.655 06:30:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.655 06:30:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.655 06:30:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.655 06:30:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.655 06:30:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.655 06:30:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952796 kB' 'MemAvailable: 10465604 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460044 kB' 'Inactive: 2378296 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 
53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180040 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99784 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.655 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.655 06:30:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 
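The get_meminfo calls in this stretch also show how the source file is chosen: with node unset, `/sys/devices/system/node/node/meminfo` does not exist and the helper falls back to /proc/meminfo (the HugePages_Surp and HugePages_Rsvd reads here), while with node=0, as earlier in the trace, it reads the per-node file and strips the `Node 0 ` prefix via `mem=("${mem[@]#Node +([0-9]) }")`. A sketch of that selection with illustrative names; the awk extraction at the end stands in for the var/val loop the real helper uses:

```bash
shopt -s extglob
node_meminfo() {
    local node=$1 get=$2 mem_f=/proc/meminfo
    # Per-node file exists only for a real node id; node="" falls through,
    # exactly like the [[ -e .../node$node/meminfo ]] test in the trace.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node 0 " prefix (no-op for /proc)
    printf '%s\n' "${mem[@]}" | awk -v k="$get" -F': +' '$1 == k {print $2+0}'
}

node_meminfo ""  HugePages_Surp   # node unset -> /proc/meminfo; 0 in this run
node_meminfo "0" HugePages_Rsvd   # per-node variant, as used earlier for node 0
```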
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue 00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': ' 
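For readability, the per-field scan above is collapsed. What the trimmed xtrace shows is setup/common.sh's get_meminfo helper reading a meminfo-style snapshot line by line and skipping every field until it reaches the one it was asked for. A minimal sketch of that loop, reconstructed from the trace (the name get_meminfo_sketch and some details are assumptions of mine; the real helper may differ):

shopt -s extglob   # needed for the +([0-9]) pattern used below

# Sketch of get_meminfo as suggested by the xtrace: print the value of
# one field ($1) from /proc/meminfo, or from a node's meminfo file when
# a node id is passed as $2.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local -a mem
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of continue in the log
        echo "$val"
        return 0
    done
    return 1
}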
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.656 06:30:06 -- setup/common.sh@32 -- # continue
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # IFS=': '
00:05:11.656 06:30:06 -- setup/common.sh@31 -- # read -r var val _
00:05:11.656 06:30:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.656 06:30:06 -- setup/common.sh@33 -- # echo 0
00:05:11.656 06:30:06 -- setup/common.sh@33 -- # return 0
00:05:11.656 06:30:07 -- setup/hugepages.sh@100 -- # resv=0
00:05:11.656 06:30:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:11.657 nr_hugepages=512
00:05:11.657 resv_hugepages=0
00:05:11.657 surplus_hugepages=0
00:05:11.657 anon_hugepages=0
00:05:11.657 06:30:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.657 06:30:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.657 06:30:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.657 06:30:07 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:11.657 06:30:07 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:11.657 06:30:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.657 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.657 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:11.657 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:11.657 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.657 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.657 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.657 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.657 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.657 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.657 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:11.657 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:11.657 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952796 kB' 'MemAvailable: 10465604 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 459984 kB' 'Inactive: 2378296 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180036 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:11.657 [xtrace trimmed: the same per-field scan runs again over this snapshot, this time until HugePages_Total matches]
00:05:11.659 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.659 06:30:07 -- setup/common.sh@33 -- # echo 512
00:05:11.659 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:11.659 06:30:07 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
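The check just traced, (( 512 == nr_hugepages + surp + resv )), is the heart of the verification: the HugePages_Total the kernel reports must equal the requested page count plus surplus and reserved pages. A hedged sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from earlier:

# Sketch of the accounting check at setup/hugepages.sh@107 as seen in the
# trace; in this run nr_hugepages=512 and resv/surp are both 0.
verify_hugepage_accounting() {
    local nr_hugepages=$1
    local total resv surp
    total=$(get_meminfo_sketch HugePages_Total)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    surp=$(get_meminfo_sketch HugePages_Surp)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # Kernel total must equal requested pages plus surplus and reserved.
    (( total == nr_hugepages + surp + resv ))
}

The get_nodes pass the trace moves on to next repeats the surplus lookup per NUMA node (only node0 on this VM), which is why the same scan runs again below against /sys/devices/system/node/node0/meminfo.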
00:05:11.659 06:30:07 -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.659 06:30:07 -- setup/hugepages.sh@27 -- # local node
00:05:11.659 06:30:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.659 06:30:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:11.659 06:30:07 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.659 06:30:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.659 06:30:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.659 06:30:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.659 06:30:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.659 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.659 06:30:07 -- setup/common.sh@18 -- # local node=0
00:05:11.659 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:11.659 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.659 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.659 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.659 06:30:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.659 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.659 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.659 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:11.659 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:11.659 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7952796 kB' 'MemUsed: 4286316 kB' 'SwapCached: 0 kB' 'Active: 460060 kB' 'Inactive: 2378296 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720320 kB' 'Mapped: 53644 kB' 'AnonPages: 119608 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80256 kB' 'Slab: 180036 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:11.659 [xtrace trimmed: the per-field scan runs over the node0 snapshot above until HugePages_Surp matches]
00:05:11.660 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.660 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:11.660 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:11.660 06:30:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.660 06:30:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.660 node0=512 expecting 512
00:05:11.660 06:30:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.660 06:30:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.660 06:30:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:11.660 06:30:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:11.660
00:05:11.660 real	0m0.598s
00:05:11.660 user	0m0.290s
00:05:11.660 sys	0m0.298s
00:05:11.660 06:30:07 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:11.660 ************************************
00:05:11.660 END TEST custom_alloc
00:05:11.660 ************************************
00:05:11.660 06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:05:11.920 06:30:07 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:11.920 06:30:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.920 06:30:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.920 ************************************
00:05:11.920 START TEST no_shrink_alloc
00:05:11.920 ************************************
00:05:11.920 06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:05:11.920 06:30:07 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:11.920 06:30:07 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.920 06:30:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
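custom_alloc passes (node0=512 expecting 512), and the suite moves on to no_shrink_alloc, calling get_test_nr_hugepages with a size of 2097152 kB pinned to node 0. The xtrace that resumes below derives nr_hugepages=1024 from that size; the arithmetic appears to be the requested size divided by the default huge page size (2048 kB on this VM). A rough sketch under those assumptions, again using the hypothetical helper from earlier:

# Sketch of the sizing math traced around setup/hugepages.sh@49-@71:
# turn a request in kB into a hugepage count and assign it to node 0.
size_kb=2097152
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 kB per page here
nr_hugepages=$(( size_kb / hugepagesize_kb ))        # 2097152 / 2048 = 1024
declare -a nodes_test=()
nodes_test[0]=$nr_hugepages   # node list was ('0'): all pages on node 0
echo "requesting $nr_hugepages hugepages on node 0"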
00:05:11.920 06:30:07 -- setup/hugepages.sh@51 -- # shift
00:05:11.920 06:30:07 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:11.920 06:30:07 -- setup/hugepages.sh@52 -- # local node_ids
00:05:11.920 06:30:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.920 06:30:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.920 06:30:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:11.920 06:30:07 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:11.920 06:30:07 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.920 06:30:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.920 06:30:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.920 06:30:07 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.920 06:30:07 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.920 06:30:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:11.920 06:30:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:11.920 06:30:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:11.920 06:30:07 -- setup/hugepages.sh@73 -- # return 0
00:05:11.920 06:30:07 -- setup/hugepages.sh@198 -- # setup output
00:05:11.921 06:30:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.921 06:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:12.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:12.182 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:12.182 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:12.182 06:30:07 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:12.182 06:30:07 -- setup/hugepages.sh@89 -- # local node
00:05:12.182 06:30:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.182 06:30:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.182 06:30:07 -- setup/hugepages.sh@92 -- # local surp
00:05:12.182 06:30:07 -- setup/hugepages.sh@93 -- # local resv
00:05:12.182 06:30:07 -- setup/hugepages.sh@94 -- # local anon
00:05:12.182 06:30:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:12.182 06:30:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.182 06:30:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.182 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:12.182 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:12.182 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.182 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.182 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.182 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.183 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.183 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.183 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:12.183 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:12.183 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6900600 kB' 'MemAvailable: 9413408 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460472 kB' 'Inactive: 2378296 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120068 kB'
'Mapped: 53784 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180024 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99768 kB' 'KernelStack: 6680 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:12.183 [xtrace trimmed: per-field scan of the snapshot above for AnonHugePages; every other field hits continue]
00:05:12.184 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.184 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:12.184 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:12.184 06:30:07 -- setup/hugepages.sh@97 -- # anon=0
00:05:12.184 06:30:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.184 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.184 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:12.184 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:12.184 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.184 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.184 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.184 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.184 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.184 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.184 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:12.184 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:12.184 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6900868 kB' 'MemAvailable: 9413676 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460248 kB' 'Inactive: 2378296 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180028 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99772 kB' 'KernelStack: 6720 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024'
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:12.184 [xtrace trimmed: get_meminfo begins the per-field scan of this snapshot for HugePages_Surp; the remaining iterations continue below]
00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.185 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.185 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.186 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:12.186 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:12.186 06:30:07 -- setup/hugepages.sh@99 -- # surp=0 00:05:12.186 06:30:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.186 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.186 06:30:07 -- setup/common.sh@18 -- # local node= 00:05:12.186 06:30:07 -- setup/common.sh@19 -- # local var val 00:05:12.186 06:30:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.186 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.186 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.186 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.186 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.186 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6900916 kB' 'MemAvailable: 9413724 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460008 kB' 'Inactive: 2378296 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180024 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99768 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.186 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.186 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # 
continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.187 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.187 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 
06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.188 06:30:07 -- setup/common.sh@33 -- # echo 0 00:05:12.188 06:30:07 -- setup/common.sh@33 -- # return 0 00:05:12.188 nr_hugepages=1024 00:05:12.188 resv_hugepages=0 00:05:12.188 surplus_hugepages=0 00:05:12.188 anon_hugepages=0 00:05:12.188 06:30:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:12.188 06:30:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.188 06:30:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.188 06:30:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.188 06:30:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.188 06:30:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.188 06:30:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.188 06:30:07 -- setup/hugepages.sh@110 -- # get_meminfo 
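The get_meminfo helper being traced above can be summarized as the following minimal bash sketch. It is reconstructed from the xtrace entries in this log rather than copied from test/setup/common.sh, so the exact function body is an assumption:

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern seen in the trace above
    # (an assumption reconstructed from the xtrace output, not the
    # verbatim SPDK source).
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node meminfo file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Scan field by field, as the repeated [[ ... ]]/continue
            # entries in the trace show, until the requested key matches.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Example: in this run these return 0 and 1024 respectively.
    surp=$(get_meminfo HugePages_Surp)    # -> 0
    total=$(get_meminfo HugePages_Total)  # -> 1024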
00:05:12.188 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.188 06:30:07 -- setup/common.sh@18 -- # local node=
00:05:12.188 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:12.188 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.188 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.188 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.188 06:30:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.188 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.188 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.188 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:12.188 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:12.188 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6901168 kB' 'MemAvailable: 9413976 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 459988 kB' 'Inactive: 2378296 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 53644 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180024 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99768 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:12.188 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.188 06:30:07 -- setup/common.sh@32 -- # continue
...
00:05:12.450 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.450 06:30:07 -- setup/common.sh@33 -- # echo 1024
00:05:12.450 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:12.450 06:30:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.450 06:30:07 -- setup/hugepages.sh@112 -- # get_nodes
00:05:12.450 06:30:07 -- setup/hugepages.sh@27 -- # local node
00:05:12.450 06:30:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.450 06:30:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:12.450 06:30:07 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:12.450 06:30:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.450 06:30:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.450 06:30:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.450 06:30:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.451 06:30:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.451 06:30:07 -- setup/common.sh@18 -- # local node=0
00:05:12.451 06:30:07 -- setup/common.sh@19 -- # local var val
00:05:12.451 06:30:07 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.451 06:30:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.451 06:30:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.451 06:30:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.451 06:30:07 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.451 06:30:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': '
00:05:12.451 06:30:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6901168 kB' 'MemUsed: 5337944 kB' 'SwapCached: 0 kB' 'Active: 460076 kB' 'Inactive: 2378296 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2720320 kB' 'Mapped: 53644 kB' 'AnonPages: 119628 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80256 kB' 'Slab: 180024 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue
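The get_nodes and per-node verification steps traced here reduce to roughly the sketch below, reusing the get_meminfo sketch from earlier; nodes_sys, nodes_test and the surrounding flow are inferred from the trace, not taken from hugepages.sh:

    # Sketch of the per-node pass visible in the trace (variable names
    # from the xtrace output; the exact hugepages.sh flow is an assumption).
    shopt -s extglob nullglob
    nr_hugepages=1024
    nodes_sys=() nodes_test=()

    # get_nodes: enumerate NUMA nodes; this VM exposes a single node0.
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n")
        nodes_test[n]=$nr_hugepages   # expected per-node count (assumed)
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1

    # Fold reserved and per-node surplus pages into the expected counts;
    # surplus comes from /sys/devices/system/node/nodeN/meminfo, as the
    # get_meminfo HugePages_Surp 0 call above shows.
    resv=0   # from the earlier get_meminfo HugePages_Rsvd call
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Prints "node0=1024 expecting 1024" on this runner.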
00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.451 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.451 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.452 06:30:07 -- setup/common.sh@32 -- # continue 00:05:12.452 06:30:07 -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.452 06:30:07 -- setup/common.sh@31 -- # read -r var val _
00:05:12.452 06:30:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.452 06:30:07 -- setup/common.sh@33 -- # echo 0
00:05:12.452 06:30:07 -- setup/common.sh@33 -- # return 0
00:05:12.452 node0=1024 expecting 1024
00:05:12.452 06:30:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.452 06:30:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.452 06:30:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.452 06:30:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.452 06:30:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:12.452 06:30:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:12.452 06:30:07 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:12.452 06:30:07 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:12.452 06:30:07 -- setup/hugepages.sh@202 -- # setup output
00:05:12.452 06:30:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.452 06:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:12.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:12.714 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:12.714 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:12.714 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:12.714 06:30:08 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:12.714 06:30:08 -- setup/hugepages.sh@89 -- # local node
00:05:12.714 06:30:08 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.714 06:30:08 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.714 06:30:08 -- setup/hugepages.sh@92 -- # local surp
00:05:12.714 06:30:08 -- setup/hugepages.sh@93 -- # local resv
00:05:12.714 06:30:08 -- setup/hugepages.sh@94 -- # local anon
00:05:12.714 06:30:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
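Every long per-key trace in this section comes from a single helper, get_meminfo in setup/common.sh. As a readable reference for the calls that follow, here is a minimal sketch of that helper, reconstructed only from the xtrace commands and line numbers visible in this log; it is not the verbatim SPDK source.

shopt -s extglob

get_meminfo() {
	local get=$1    # meminfo key to report, e.g. AnonHugePages or HugePages_Surp
	local node=$2   # optional NUMA node; empty means the system-wide /proc/meminfo
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		# Per-node lookups read the node's own meminfo file instead
		# (with node empty this path does not exist, as at common.sh@23).
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# This loop is what generates the long trace: each key is tested
	# against $get and skipped with continue until the match, whose
	# value is echoed (0 for HugePages_Surp, 1024 for HugePages_Total).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

Callers capture the echoed value, e.g. surp=$(get_meminfo HugePages_Surp) at hugepages.sh@99, or per node as get_meminfo HugePages_Surp 0 at @117.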
00:05:12.714 06:30:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.714 06:30:08 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.714 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:12.714 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:12.714 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.714 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.714 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.714 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.714 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.714 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.714 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:12.714 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:12.715 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6898140 kB' 'MemAvailable: 9410948 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 460352 kB' 'Inactive: 2378296 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 53728 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180036 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6680 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: every key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue]
00:05:12.716 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.716 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:12.716 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:12.716 06:30:08 -- setup/hugepages.sh@97 -- # anon=0
00:05:12.716 06:30:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.716 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.716 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:12.716 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:12.716 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.716 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.716 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.716 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.716 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.716 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.716 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:12.716 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:12.716 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6899660 kB' 'MemAvailable: 9412468 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 457756 kB' 'Inactive: 2378296 kB' 'Active(anon): 126220 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117328 kB' 'Mapped: 52948 kB' 'Shmem: 10488 kB' 'KReclaimable: 80256 kB' 'Slab: 180028 kB' 'SReclaimable: 80256 kB' 'SUnreclaim: 99772 kB' 'KernelStack: 6648 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: every key up through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:05:12.717 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.717 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:12.717 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:12.717 06:30:08 -- setup/hugepages.sh@99 -- # surp=0
00:05:12.717 06:30:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.717 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.717 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:12.717 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:12.717 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.717 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.717 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.717 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.717 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.717 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.717 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:12.717 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:12.717 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6899864 kB' 'MemAvailable: 9412668 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 457240 kB' 'Inactive: 2378296 kB' 'Active(anon): 125704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116824 kB' 'Mapped: 52944 kB' 'Shmem: 10488 kB' 'KReclaimable: 80248 kB' 'Slab: 179940 kB' 'SReclaimable: 80248 kB' 'SUnreclaim: 99692 kB' 'KernelStack: 6568 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: every key up through HugePages_Free is compared against HugePages_Rsvd and skipped with continue]
00:05:12.718 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.718 06:30:08 -- setup/common.sh@33 -- # echo 0
00:05:12.718 06:30:08 -- setup/common.sh@33 -- # return 0
00:05:12.718 06:30:08 -- setup/hugepages.sh@100 -- # resv=0
00:05:12.718 nr_hugepages=1024
00:05:12.718 06:30:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:12.719 resv_hugepages=0
00:05:12.719 06:30:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:12.719 surplus_hugepages=0
00:05:12.719 06:30:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:12.719 anon_hugepages=0
00:05:12.719 06:30:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:12.719 06:30:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.719 06:30:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
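With anon, surp and resv all gathered, the checks traced at hugepages.sh@107 and @109 reduce to simple pool accounting. Restated compactly below; the variable names mirror the trace, and the literal 1024 is the HugePages_Total value the log reports:

nr_hugepages=1024   # the configured pool size under test
surp=0              # HugePages_Surp: pages allocated beyond the configured pool
resv=0              # HugePages_Rsvd: pages promised to mappings but not yet faulted in
anon=0              # AnonHugePages (THP), gathered only because THP is not set to [never]

# The pool is consistent when the kernel's HugePages_Total equals the
# configured count plus surplus and reserved pages; with surp=resv=0
# both checks collapse to 1024 == 1024 and succeed.
(( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
(( 1024 == nr_hugepages )) || echo 'unexpected surplus or reserved pages'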
00:05:12.719 06:30:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:12.719 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.719 06:30:08 -- setup/common.sh@18 -- # local node=
00:05:12.719 06:30:08 -- setup/common.sh@19 -- # local var val
00:05:12.719 06:30:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.719 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.719 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.719 06:30:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.719 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.719 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.719 06:30:08 -- setup/common.sh@31 -- # IFS=': '
00:05:12.719 06:30:08 -- setup/common.sh@31 -- # read -r var val _
00:05:12.719 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6900152 kB' 'MemAvailable: 9412956 kB' 'Buffers: 2684 kB' 'Cached: 2717636 kB' 'SwapCached: 0 kB' 'Active: 457128 kB' 'Inactive: 2378296 kB' 'Active(anon): 125592 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116696 kB' 'Mapped: 52944 kB' 'Shmem: 10488 kB' 'KReclaimable: 80248 kB' 'Slab: 179936 kB' 'SReclaimable: 80248 kB' 'SUnreclaim: 99688 kB' 'KernelStack: 6536 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: every key up through Unaccepted is compared against HugePages_Total and skipped with continue]
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.720 06:30:08 -- setup/common.sh@33 -- # echo 1024 00:05:12.720 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:12.720 06:30:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.720 06:30:08 -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.720 06:30:08 -- setup/hugepages.sh@27 -- # local node 00:05:12.720 06:30:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.720 06:30:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.720 06:30:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.720 06:30:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.720 06:30:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.720 06:30:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.720 06:30:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.720 06:30:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.720 06:30:08 -- setup/common.sh@18 -- # local node=0 00:05:12.720 06:30:08 -- setup/common.sh@19 -- # local var val 00:05:12.720 06:30:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.720 06:30:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.720 06:30:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.720 06:30:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.720 06:30:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.720 06:30:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6900152 kB' 'MemUsed: 5338960 kB' 'SwapCached: 0 kB' 'Active: 457356 kB' 'Inactive: 2378296 kB' 'Active(anon): 125820 kB' 'Inactive(anon): 0 kB' 'Active(file): 331536 kB' 'Inactive(file): 2378296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2720320 kB' 'Mapped: 52944 kB' 
'AnonPages: 116928 kB' 'Shmem: 10488 kB' 'KernelStack: 6588 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80248 kB' 'Slab: 179936 kB' 'SReclaimable: 80248 kB' 'SUnreclaim: 99688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.720 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.720 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 
00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.981 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.981 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- 
setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@32 -- # continue 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.982 06:30:08 -- setup/common.sh@31 -- # read -r var val _ 
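The field-by-field scan traced here is setup/common.sh's get_meminfo helper. A minimal standalone sketch of the same technique follows; it is a hedged reconstruction rather than SPDK's exact code, assuming only the standard Linux /proc/meminfo and per-NUMA-node meminfo layout:

get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local line var val _
    # per-node counters live in /sys/devices/system/node/node<N>/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node * }              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        # every non-matching key falls through, exactly like the trace above
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}
# get_meminfo HugePages_Total   -> 1024 on this runner
# get_meminfo HugePages_Surp 0  -> 0 for node 0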
00:05:12.982 06:30:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.982 06:30:08 -- setup/common.sh@33 -- # echo 0 00:05:12.982 06:30:08 -- setup/common.sh@33 -- # return 0 00:05:12.982 06:30:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.982 06:30:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.982 06:30:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.982 06:30:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.982 node0=1024 expecting 1024 00:05:12.982 06:30:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.982 06:30:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.982 00:05:12.982 real 0m1.047s 00:05:12.982 user 0m0.526s 00:05:12.982 sys 0m0.557s 00:05:12.982 06:30:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.982 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.982 ************************************ 00:05:12.982 END TEST no_shrink_alloc 00:05:12.982 ************************************ 00:05:12.982 06:30:08 -- setup/hugepages.sh@217 -- # clear_hp 00:05:12.982 06:30:08 -- setup/hugepages.sh@37 -- # local node hp 00:05:12.982 06:30:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.982 06:30:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.982 06:30:08 -- setup/hugepages.sh@41 -- # echo 0 00:05:12.982 06:30:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.982 06:30:08 -- setup/hugepages.sh@41 -- # echo 0 00:05:12.982 06:30:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:12.982 06:30:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.982 00:05:12.982 real 0m4.800s 00:05:12.982 user 0m2.327s 00:05:12.982 sys 0m2.482s 00:05:12.982 06:30:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.982 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.982 ************************************ 00:05:12.982 END TEST hugepages 00:05:12.982 ************************************ 00:05:12.982 06:30:08 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:12.982 06:30:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.982 06:30:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.982 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.982 ************************************ 00:05:12.982 START TEST driver 00:05:12.982 ************************************ 00:05:12.982 06:30:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:12.982 * Looking for test storage... 
00:05:12.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup [... common/autotest_common.sh@1689-@1704 and scripts/common.sh@332-@367: the coverage probe reads the installed lcov version (lcov --version | awk '{print $NF}'), walks it through cmp_versions 1.15 '<' 2 field by field (return 0), sets lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1', and exports LCOV_OPTS and LCOV with the genhtml/geninfo coverage flags ...] 00:05:13.243 06:30:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:13.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.243 --rc genhtml_branch_coverage=1 00:05:13.243 --rc genhtml_function_coverage=1 00:05:13.243 --rc genhtml_legend=1 00:05:13.243 --rc geninfo_all_blocks=1 00:05:13.243 --rc geninfo_unexecuted_blocks=1 00:05:13.243 00:05:13.243 ' 00:05:13.243 06:30:08 -- setup/driver.sh@68 -- # setup reset 00:05:13.243 06:30:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.243 06:30:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.812 06:30:09 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.812 06:30:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.812 06:30:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.812 06:30:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.812 ************************************ 00:05:13.812 START TEST guess_driver 00:05:13.812 ************************************ 00:05:13.812 06:30:09 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:13.812 06:30:09 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.812 06:30:09 -- setup/driver.sh@47 -- # local fail=0 00:05:13.812 06:30:09 -- setup/driver.sh@49 -- # pick_driver 00:05:13.812 06:30:09 -- setup/driver.sh@36 -- # vfio 00:05:13.812 06:30:09 -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.812 06:30:09 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.812 06:30:09 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.812 06:30:09 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.812 06:30:09 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:13.812 06:30:09 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:13.812 06:30:09 -- setup/driver.sh@32 -- # return 1 00:05:13.812 06:30:09 -- setup/driver.sh@38 -- # uio 00:05:13.812 06:30:09 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:13.812 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:13.812 06:30:09 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.812 Looking for driver=uio_pci_generic 00:05:13.812 06:30:09 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:13.812 06:30:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.812 06:30:09 -- setup/driver.sh@45 -- # setup output config 00:05:13.812 06:30:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.812 06:30:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.381 06:30:09 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:14.381 06:30:09 -- setup/driver.sh@58 -- # continue 00:05:14.381 06:30:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.381 06:30:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.381 06:30:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:14.381 06:30:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.381 06:30:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.381 06:30:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:14.381 06:30:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.641 06:30:09 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:14.641 06:30:09 -- setup/driver.sh@65 -- # setup reset 00:05:14.641 06:30:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.641 06:30:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.210 ************************************ 00:05:15.210 END TEST guess_driver 00:05:15.210 ************************************ 00:05:15.210 00:05:15.210 real 0m1.411s 00:05:15.210 user 0m0.555s 00:05:15.210 sys 0m0.857s 00:05:15.210 06:30:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.210 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:15.210 00:05:15.210 real 0m2.183s 00:05:15.210 user 0m0.867s 00:05:15.210 sys 0m1.388s 00:05:15.210 06:30:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.210 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:15.210 ************************************ 00:05:15.210 END TEST driver 00:05:15.210 ************************************ 00:05:15.210 06:30:10 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:15.210 06:30:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.210 06:30:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.210 06:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:15.210 ************************************ 00:05:15.210 START TEST devices 00:05:15.210 ************************************ 00:05:15.210 06:30:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:15.210 * Looking for test storage... 00:05:15.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:15.210 06:30:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.210 06:30:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.210 06:30:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.470 06:30:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.470 06:30:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.470 06:30:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.470 06:30:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.470 06:30:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.471 06:30:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.471 06:30:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.471 06:30:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.471 06:30:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.471 06:30:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.471 06:30:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.471 06:30:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.471 06:30:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.471 06:30:10 -- scripts/common.sh@344 -- # : 1 00:05:15.471 06:30:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.471 06:30:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) [... the remaining decimal/compare steps (scripts/common.sh@352-@367, return 0) and the lcov_rc_opt, LCOV_OPTS and LCOV exports repeat here, identical to the driver section above ...] 00:05:15.471 06:30:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:15.471 06:30:10 -- setup/devices.sh@192 -- # setup reset 00:05:15.471 06:30:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.471 06:30:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.039 06:30:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:16.039 06:30:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:16.039 06:30:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:16.039 06:30:11 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:16.039 06:30:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.039 06:30:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:16.039 06:30:11 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:16.039 06:30:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.039 06:30:11 -- common/autotest_common.sh@1660
-- # [[ none != none ]] 00:05:16.039 06:30:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.039 06:30:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:16.039 06:30:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:16.039 06:30:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.040 06:30:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.040 06:30:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.040 06:30:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:16.040 06:30:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:16.040 06:30:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:16.040 06:30:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.040 06:30:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:16.040 06:30:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:16.040 06:30:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:16.040 06:30:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:16.040 06:30:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:16.040 06:30:11 -- setup/devices.sh@196 -- # blocks=() 00:05:16.040 06:30:11 -- setup/devices.sh@196 -- # declare -a blocks 00:05:16.040 06:30:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:16.040 06:30:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:16.040 06:30:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:16.040 06:30:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.040 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:16.040 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:16.040 06:30:11 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:16.040 06:30:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:16.040 06:30:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:16.040 06:30:11 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:16.040 06:30:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:16.040 No valid GPT data, bailing 00:05:16.040 06:30:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:16.040 06:30:11 -- scripts/common.sh@393 -- # pt= 00:05:16.040 06:30:11 -- scripts/common.sh@394 -- # return 1 00:05:16.040 06:30:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:16.040 06:30:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:16.040 06:30:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:16.040 06:30:11 -- setup/common.sh@80 -- # echo 5368709120 00:05:16.040 06:30:11 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:16.040 06:30:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.040 06:30:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:16.040 06:30:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.040 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:16.040 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:16.040 06:30:11 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:16.040 06:30:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:16.040 06:30:11 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
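The block_in_use probe traced above (and repeated next for nvme1n1) treats a disk as busy when any partition table is found on it. A rough equivalent using only blkid; SPDK's own spdk-gpt.py pre-check is omitted here:

# a device counts as "in use" when blkid reports any partition-table type;
# empty output means the disk is free for the tests to claim
block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block")
    [[ -n $pt ]]
}
block_in_use nvme0n1 || echo "nvme0n1 has no partition table"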
00:05:16.040 06:30:11 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:16.040 06:30:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:16.299 No valid GPT data, bailing 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # pt= 00:05:16.299 06:30:11 -- scripts/common.sh@394 -- # return 1 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:16.299 06:30:11 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:16.299 06:30:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:16.299 06:30:11 -- setup/common.sh@80 -- # echo 4294967296 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:16.299 06:30:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.299 06:30:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:16.299 06:30:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.299 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:16.299 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:16.299 06:30:11 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:16.299 06:30:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:16.299 06:30:11 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:16.299 06:30:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:16.299 No valid GPT data, bailing 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # pt= 00:05:16.299 06:30:11 -- scripts/common.sh@394 -- # return 1 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:16.299 06:30:11 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:16.299 06:30:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:16.299 06:30:11 -- setup/common.sh@80 -- # echo 4294967296 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:16.299 06:30:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.299 06:30:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:16.299 06:30:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:16.299 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:16.299 06:30:11 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:16.299 06:30:11 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:16.299 06:30:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:16.299 06:30:11 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:16.299 06:30:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:16.299 No valid GPT data, bailing 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:16.299 06:30:11 -- scripts/common.sh@393 -- # pt= 00:05:16.299 06:30:11 -- scripts/common.sh@394 -- # return 1 00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:16.299 06:30:11 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:16.299 06:30:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:16.299 06:30:11 -- setup/common.sh@80 -- # echo 4294967296 
00:05:16.299 06:30:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:16.299 06:30:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:16.299 06:30:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:16.299 06:30:11 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:16.299 06:30:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:16.299 06:30:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:16.299 06:30:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.299 06:30:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.299 06:30:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.299 ************************************ 00:05:16.299 START TEST nvme_mount 00:05:16.299 ************************************ 00:05:16.299 06:30:11 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:16.299 06:30:11 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:16.299 06:30:11 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:16.299 06:30:11 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.299 06:30:11 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:16.299 06:30:11 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:16.299 06:30:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.299 06:30:11 -- setup/common.sh@40 -- # local part_no=1 00:05:16.299 06:30:11 -- setup/common.sh@41 -- # local size=1073741824 00:05:16.299 06:30:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.299 06:30:11 -- setup/common.sh@44 -- # parts=() 00:05:16.299 06:30:11 -- setup/common.sh@44 -- # local parts 00:05:16.299 06:30:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.299 06:30:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.299 06:30:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.299 06:30:11 -- setup/common.sh@46 -- # (( part++ )) 00:05:16.299 06:30:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.299 06:30:11 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:16.299 06:30:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.299 06:30:11 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:17.678 Creating new GPT entries in memory. 00:05:17.678 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.678 other utilities. 00:05:17.678 06:30:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.678 06:30:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.678 06:30:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.678 06:30:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.678 06:30:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:18.612 Creating new GPT entries in memory. 00:05:18.612 The operation has completed successfully. 
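The partition_drive flow just traced wipes the disk's partition tables and creates one fixed-size partition under an flock before waiting for udev. A condensed sketch with the same numbers; udevadm settle stands in for SPDK's sync_dev_uevents.sh helper:

disk=/dev/nvme0n1
start=2048                            # first 1 MiB-aligned usable sector
size=$((1073741824 / 4096))           # 262144 512-byte sectors, i.e. 128 MiB
sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:"$start":$((start + size - 1))   # 2048..264191
udevadm settle                        # wait for /dev/nvme0n1p1 to appear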
00:05:18.612 06:30:13 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.612 06:30:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.612 06:30:13 -- setup/common.sh@62 -- # wait 63874 00:05:18.612 06:30:13 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.612 06:30:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:18.612 06:30:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.612 06:30:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:18.612 06:30:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:18.612 06:30:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.612 06:30:13 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.612 06:30:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:18.612 06:30:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:18.612 06:30:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.612 06:30:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.612 06:30:13 -- setup/devices.sh@53 -- # local found=0 00:05:18.612 06:30:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.612 06:30:13 -- setup/devices.sh@56 -- # : 00:05:18.612 06:30:13 -- setup/devices.sh@59 -- # local pci status 00:05:18.612 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.612 06:30:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:18.612 06:30:13 -- setup/devices.sh@47 -- # setup output config 00:05:18.612 06:30:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.612 06:30:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.612 06:30:14 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.612 06:30:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:18.612 06:30:14 -- setup/devices.sh@63 -- # found=1 00:05:18.613 06:30:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.613 06:30:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.613 06:30:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.179 06:30:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.179 06:30:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.179 06:30:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.179 06:30:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.179 06:30:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.179 06:30:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:19.179 06:30:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.179 06:30:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.179 06:30:14 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.179 06:30:14 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:19.179 06:30:14 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.179 06:30:14 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.179 06:30:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.179 06:30:14 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:19.179 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.179 06:30:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.179 06:30:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.437 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:19.437 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:19.437 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:19.437 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:19.437 06:30:14 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:19.437 06:30:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:19.437 06:30:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.437 06:30:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:19.437 06:30:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:19.437 06:30:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.437 06:30:14 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.437 06:30:14 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:19.437 06:30:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:19.437 06:30:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.437 06:30:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.437 06:30:14 -- setup/devices.sh@53 -- # local found=0 00:05:19.437 06:30:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.437 06:30:14 -- setup/devices.sh@56 -- # : 00:05:19.437 06:30:14 -- setup/devices.sh@59 -- # local pci status 00:05:19.437 06:30:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.437 06:30:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:19.437 06:30:14 -- setup/devices.sh@47 -- # setup output config 00:05:19.437 06:30:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.437 06:30:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.695 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.695 06:30:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:19.695 06:30:15 -- setup/devices.sh@63 -- # found=1 00:05:19.695 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.695 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.695 
06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.952 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.952 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.952 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:19.952 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.209 06:30:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.209 06:30:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:20.209 06:30:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.209 06:30:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.209 06:30:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.209 06:30:15 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.209 06:30:15 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:20.209 06:30:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:20.209 06:30:15 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:20.209 06:30:15 -- setup/devices.sh@50 -- # local mount_point= 00:05:20.209 06:30:15 -- setup/devices.sh@51 -- # local test_file= 00:05:20.209 06:30:15 -- setup/devices.sh@53 -- # local found=0 00:05:20.209 06:30:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.209 06:30:15 -- setup/devices.sh@59 -- # local pci status 00:05:20.209 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.209 06:30:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:20.209 06:30:15 -- setup/devices.sh@47 -- # setup output config 00:05:20.209 06:30:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.209 06:30:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.466 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.466 06:30:15 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:20.466 06:30:15 -- setup/devices.sh@63 -- # found=1 00:05:20.466 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.466 06:30:15 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.466 06:30:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.724 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.724 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.724 06:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.724 06:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.724 06:30:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.724 06:30:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:20.724 06:30:16 -- setup/devices.sh@68 -- # return 0 00:05:20.724 06:30:16 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:20.724 06:30:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.724 06:30:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.724 06:30:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.724 06:30:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:20.981 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:20.981 00:05:20.981 real 0m4.454s 00:05:20.981 user 0m1.012s 00:05:20.981 sys 0m1.119s 00:05:20.981 06:30:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.981 06:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 ************************************ 00:05:20.981 END TEST nvme_mount 00:05:20.981 ************************************ 00:05:20.981 06:30:16 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:20.981 06:30:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.981 06:30:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.981 06:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 ************************************ 00:05:20.981 START TEST dm_mount 00:05:20.981 ************************************ 00:05:20.981 06:30:16 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:20.981 06:30:16 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:20.981 06:30:16 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:20.981 06:30:16 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:20.981 06:30:16 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:20.981 06:30:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:20.981 06:30:16 -- setup/common.sh@40 -- # local part_no=2 00:05:20.981 06:30:16 -- setup/common.sh@41 -- # local size=1073741824 00:05:20.981 06:30:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:20.981 06:30:16 -- setup/common.sh@44 -- # parts=() 00:05:20.981 06:30:16 -- setup/common.sh@44 -- # local parts 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.981 06:30:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part++ )) 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.981 06:30:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part++ )) 00:05:20.981 06:30:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:20.981 06:30:16 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:20.981 06:30:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:20.981 06:30:16 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:21.913 Creating new GPT entries in memory. 00:05:21.913 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:21.913 other utilities. 00:05:21.913 06:30:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:21.913 06:30:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.913 06:30:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:21.913 06:30:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:21.913 06:30:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:22.849 Creating new GPT entries in memory. 00:05:22.849 The operation has completed successfully. 00:05:22.849 06:30:18 -- setup/common.sh@57 -- # (( part++ )) 00:05:22.849 06:30:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.849 06:30:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
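The dm_mount test goes on to create a second partition and stitch both into one device-mapper node. The trace shows only 'dmsetup create nvme_dm_test', so the linear-concatenation table below is an assumption, though it is consistent with both partitions later appearing as holders of dm-0:

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")          # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test  # resolves to the backing /dev/dm-<N>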
2048 : part_end + 1 )) 00:05:22.849 06:30:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.849 06:30:18 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:24.251 The operation has completed successfully. 00:05:24.251 06:30:19 -- setup/common.sh@57 -- # (( part++ )) 00:05:24.251 06:30:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.251 06:30:19 -- setup/common.sh@62 -- # wait 64335 00:05:24.251 06:30:19 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:24.251 06:30:19 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.251 06:30:19 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:24.251 06:30:19 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:24.251 06:30:19 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:24.251 06:30:19 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.251 06:30:19 -- setup/devices.sh@161 -- # break 00:05:24.251 06:30:19 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.251 06:30:19 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:24.251 06:30:19 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:24.251 06:30:19 -- setup/devices.sh@166 -- # dm=dm-0 00:05:24.251 06:30:19 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:24.251 06:30:19 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:24.251 06:30:19 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.251 06:30:19 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:24.251 06:30:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.251 06:30:19 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:24.251 06:30:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:24.251 06:30:19 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.251 06:30:19 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:24.251 06:30:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:24.251 06:30:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:24.251 06:30:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.251 06:30:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:24.251 06:30:19 -- setup/devices.sh@53 -- # local found=0 00:05:24.251 06:30:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:24.251 06:30:19 -- setup/devices.sh@56 -- # : 00:05:24.251 06:30:19 -- setup/devices.sh@59 -- # local pci status 00:05:24.251 06:30:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:24.251 06:30:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.251 06:30:19 -- setup/devices.sh@47 -- # setup output config 00:05:24.251 06:30:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.251 06:30:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.251 06:30:19 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.251 06:30:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:24.251 06:30:19 -- setup/devices.sh@63 -- # found=1 00:05:24.251 06:30:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.251 06:30:19 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.251 06:30:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.510 06:30:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.510 06:30:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.769 06:30:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:24.769 06:30:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.769 06:30:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.769 06:30:20 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:24.769 06:30:20 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.769 06:30:20 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:24.769 06:30:20 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:24.770 06:30:20 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:24.770 06:30:20 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:24.770 06:30:20 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:24.770 06:30:20 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:24.770 06:30:20 -- setup/devices.sh@50 -- # local mount_point= 00:05:24.770 06:30:20 -- setup/devices.sh@51 -- # local test_file= 00:05:24.770 06:30:20 -- setup/devices.sh@53 -- # local found=0 00:05:24.770 06:30:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.770 06:30:20 -- setup/devices.sh@59 -- # local pci status 00:05:24.770 06:30:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.770 06:30:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:24.770 06:30:20 -- setup/devices.sh@47 -- # setup output config 00:05:24.770 06:30:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.770 06:30:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.028 06:30:20 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.028 06:30:20 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:25.028 06:30:20 -- setup/devices.sh@63 -- # found=1 00:05:25.028 06:30:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.028 06:30:20 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.028 06:30:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.287 06:30:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.287 06:30:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.287 06:30:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.287 06:30:20 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.287 06:30:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.287 06:30:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:25.287 06:30:20 -- setup/devices.sh@68 -- # return 0 00:05:25.287 06:30:20 -- setup/devices.sh@187 -- # cleanup_dm 00:05:25.287 06:30:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.287 06:30:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:25.287 06:30:20 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:25.546 06:30:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.546 06:30:20 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:25.546 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.546 06:30:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:25.546 06:30:20 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:25.546 00:05:25.546 real 0m4.530s 00:05:25.546 user 0m0.690s 00:05:25.546 sys 0m0.764s 00:05:25.546 06:30:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.546 06:30:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.546 ************************************ 00:05:25.546 END TEST dm_mount 00:05:25.546 ************************************ 00:05:25.546 06:30:20 -- setup/devices.sh@1 -- # cleanup 00:05:25.546 06:30:20 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:25.546 06:30:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.546 06:30:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.546 06:30:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.546 06:30:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.546 06:30:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.805 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.805 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.805 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.805 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.805 06:30:21 -- setup/devices.sh@12 -- # cleanup_dm 00:05:25.805 06:30:21 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.805 06:30:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:25.805 06:30:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.805 06:30:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:25.805 06:30:21 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.805 06:30:21 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:25.805 00:05:25.805 real 0m10.583s 00:05:25.805 user 0m2.454s 00:05:25.805 sys 0m2.449s 00:05:25.805 06:30:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.805 ************************************ 00:05:25.805 END TEST devices 00:05:25.805 ************************************ 00:05:25.805 06:30:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.805 00:05:25.805 real 0m22.262s 00:05:25.805 user 0m7.742s 00:05:25.805 sys 0m8.883s 00:05:25.805 06:30:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.805 06:30:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.805 ************************************ 00:05:25.805 END TEST setup.sh 00:05:25.805 ************************************ 00:05:25.805 06:30:21 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:26.063 Hugepages 00:05:26.063 node hugesize free / total 00:05:26.063 node0 1048576kB 0 / 0 00:05:26.063 node0 2048kB 2048 / 2048 00:05:26.063 00:05:26.063 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.063 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:26.063 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:26.321 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:26.321 06:30:21 -- spdk/autotest.sh@128 -- # uname -s 00:05:26.321 06:30:21 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:26.321 06:30:21 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:26.321 06:30:21 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.888 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.888 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.147 06:30:22 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:28.082 06:30:23 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:28.082 06:30:23 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:28.082 06:30:23 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.082 06:30:23 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:28.082 06:30:23 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:28.082 06:30:23 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:28.082 06:30:23 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.082 06:30:23 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.082 06:30:23 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:28.082 06:30:23 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:28.082 06:30:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:28.082 06:30:23 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.601 Waiting for block devices as requested 00:05:28.601 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.601 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.601 06:30:24 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:28.601 06:30:24 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.601 06:30:24 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:28.601 06:30:24 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:28.601 06:30:24 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:28.601 06:30:24 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:28.601 06:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:28.601 06:30:24 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:28.601 06:30:24 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:28.601 06:30:24 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:28.601 06:30:24 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:28.601 06:30:24 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:28.601 06:30:24 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:28.601 06:30:24 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:28.601 06:30:24 -- common/autotest_common.sh@1552 -- # continue 00:05:28.601 06:30:24 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:28.601 06:30:24 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:28.601 06:30:24 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.601 06:30:24 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:28.860 06:30:24 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:28.860 06:30:24 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:28.860 06:30:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:28.860 06:30:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:28.860 06:30:24 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:28.860 06:30:24 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:28.860 06:30:24 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:28.860 06:30:24 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:28.860 06:30:24 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:28.860 06:30:24 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:28.860 06:30:24 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:28.860 06:30:24 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:28.860 06:30:24 -- common/autotest_common.sh@1552 -- # continue 00:05:28.860 06:30:24 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:28.860 06:30:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.860 06:30:24 -- common/autotest_common.sh@10 -- # set +x 00:05:28.860 06:30:24 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:28.860 06:30:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.860 06:30:24 -- common/autotest_common.sh@10 -- # set +x 00:05:28.860 06:30:24 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.428 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.687 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:29.687 06:30:25 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:29.687 06:30:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.687 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.687 06:30:25 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:29.687 06:30:25 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:29.687 06:30:25 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.687 06:30:25 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:29.687 06:30:25 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:29.687 06:30:25 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:29.687 06:30:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:29.687 06:30:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:29.687 06:30:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.687 06:30:25 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.687 06:30:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:29.687 06:30:25 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:29.687 06:30:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:29.687 06:30:25 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:29.687 06:30:25 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:29.687 06:30:25 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:29.687 06:30:25 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.687 06:30:25 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:29.687 06:30:25 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:29.687 06:30:25 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:29.687 06:30:25 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.687 06:30:25 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:29.687 06:30:25 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:29.688 06:30:25 -- common/autotest_common.sh@1588 -- # return 0 00:05:29.688 06:30:25 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:29.688 06:30:25 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:29.688 06:30:25 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:29.688 06:30:25 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:29.688 06:30:25 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:29.688 06:30:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.688 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 06:30:25 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.688 06:30:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.688 06:30:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.688 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.688 ************************************ 00:05:29.688 START TEST env 00:05:29.688 ************************************ 00:05:29.688 06:30:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.946 * Looking for test storage... 
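The pre-cleanup checks traced above reduce to two reusable shell idioms: enumerating NVMe BDFs from the JSON that gen_nvme.sh emits, and masking bit 3 of the controller's OACS word to see whether namespace management is supported. A minimal sketch of both, assuming nvme-cli and jq are available (the device node is the one probed above):

    # discover NVMe BDFs the same way the harness does above
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"          # on this VM: 0000:00:06.0 and 0000:00:07.0

    # read OACS and mask bit 3 (namespace management), mirroring the id-ctrl trace
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    oacs_ns_manage=$(( oacs & 0x8 ))    # 0x12a & 0x8 = 8 here, so the bit is set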
00:05:29.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:29.946 06:30:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.946 06:30:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.946 06:30:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.946 06:30:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.946 06:30:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.946 06:30:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.946 06:30:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.946 06:30:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.946 06:30:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.946 06:30:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.946 06:30:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.946 06:30:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.946 06:30:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.946 06:30:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.946 06:30:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.946 06:30:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.946 06:30:25 -- scripts/common.sh@344 -- # : 1 00:05:29.946 06:30:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.946 06:30:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.946 06:30:25 -- scripts/common.sh@364 -- # decimal 1 00:05:29.946 06:30:25 -- scripts/common.sh@352 -- # local d=1 00:05:29.946 06:30:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.946 06:30:25 -- scripts/common.sh@354 -- # echo 1 00:05:29.946 06:30:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.946 06:30:25 -- scripts/common.sh@365 -- # decimal 2 00:05:29.946 06:30:25 -- scripts/common.sh@352 -- # local d=2 00:05:29.946 06:30:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.946 06:30:25 -- scripts/common.sh@354 -- # echo 2 00:05:29.946 06:30:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.946 06:30:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.946 06:30:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.946 06:30:25 -- scripts/common.sh@367 -- # return 0 00:05:29.946 06:30:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.946 06:30:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.946 --rc genhtml_branch_coverage=1 00:05:29.946 --rc genhtml_function_coverage=1 00:05:29.946 --rc genhtml_legend=1 00:05:29.946 --rc geninfo_all_blocks=1 00:05:29.946 --rc geninfo_unexecuted_blocks=1 00:05:29.946 00:05:29.946 ' 00:05:29.946 06:30:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.946 --rc genhtml_branch_coverage=1 00:05:29.946 --rc genhtml_function_coverage=1 00:05:29.946 --rc genhtml_legend=1 00:05:29.946 --rc geninfo_all_blocks=1 00:05:29.946 --rc geninfo_unexecuted_blocks=1 00:05:29.946 00:05:29.946 ' 00:05:29.946 06:30:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.946 --rc genhtml_branch_coverage=1 00:05:29.946 --rc genhtml_function_coverage=1 00:05:29.946 --rc genhtml_legend=1 00:05:29.946 --rc geninfo_all_blocks=1 00:05:29.946 --rc geninfo_unexecuted_blocks=1 00:05:29.946 00:05:29.946 ' 00:05:29.946 06:30:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.946 --rc genhtml_branch_coverage=1 00:05:29.946 --rc genhtml_function_coverage=1 00:05:29.946 --rc genhtml_legend=1 00:05:29.946 --rc geninfo_all_blocks=1 00:05:29.946 --rc geninfo_unexecuted_blocks=1 00:05:29.946 00:05:29.946 ' 00:05:29.946 06:30:25 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.946 06:30:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.946 06:30:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.946 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.946 ************************************ 00:05:29.946 START TEST env_memory 00:05:29.946 ************************************ 00:05:29.946 06:30:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.946 00:05:29.946 00:05:29.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.946 http://cunit.sourceforge.net/ 00:05:29.946 00:05:29.946 00:05:29.946 Suite: memory 00:05:29.946 Test: alloc and free memory map ...[2024-12-05 06:30:25.366235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.946 passed 00:05:29.946 Test: mem map translation ...[2024-12-05 06:30:25.397662] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.946 [2024-12-05 06:30:25.397710] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.946 [2024-12-05 06:30:25.397781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.946 [2024-12-05 06:30:25.397803] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.204 passed 00:05:30.204 Test: mem map registration ...[2024-12-05 06:30:25.461606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.204 [2024-12-05 06:30:25.461647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.204 passed 00:05:30.204 Test: mem map adjacent registrations ...passed 00:05:30.204 00:05:30.204 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.204 suites 1 1 n/a 0 0 00:05:30.204 tests 4 4 4 0 0 00:05:30.204 asserts 152 152 152 0 n/a 00:05:30.204 00:05:30.204 Elapsed time = 0.213 seconds 00:05:30.205 00:05:30.205 real 0m0.230s 00:05:30.205 user 0m0.213s 00:05:30.205 sys 0m0.012s 00:05:30.205 06:30:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.205 06:30:25 -- common/autotest_common.sh@10 -- # set +x 00:05:30.205 ************************************ 00:05:30.205 END TEST env_memory 00:05:30.205 ************************************ 00:05:30.205 06:30:25 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:30.205 06:30:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.205 06:30:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.205 06:30:25 -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.205 ************************************ 00:05:30.205 START TEST env_vtophys 00:05:30.205 ************************************ 00:05:30.205 06:30:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:30.205 EAL: lib.eal log level changed from notice to debug 00:05:30.205 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 1 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 2 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 3 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 4 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 5 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 6 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 7 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 8 as core 0 on socket 0 00:05:30.205 EAL: Detected lcore 9 as core 0 on socket 0 00:05:30.205 EAL: Maximum logical cores by configuration: 128 00:05:30.205 EAL: Detected CPU lcores: 10 00:05:30.205 EAL: Detected NUMA nodes: 1 00:05:30.205 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:30.205 EAL: Detected shared linkage of DPDK 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:30.205 EAL: Registered [vdev] bus. 00:05:30.205 EAL: bus.vdev log level changed from disabled to notice 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:30.205 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:30.205 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:30.205 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:30.205 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.205 EAL: No shared files mode enabled, IPC is disabled 00:05:30.205 EAL: Selected IOVA mode 'PA' 00:05:30.205 EAL: Probing VFIO support... 00:05:30.205 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:30.205 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:30.205 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.205 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.205 EAL: Setting up physically contiguous memory... 
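EAL lands in IOVA mode 'PA' in this run because no vfio kernel module is present. A rough shell analogue of the probe behind the 'Module /sys/module/vfio not found' messages, as an illustration only:

    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "vfio loaded; EAL could use it"
    else
        echo "vfio missing; EAL skips VFIO support"   # the branch taken in this run
    fi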
00:05:30.205 EAL: Setting maximum number of open files to 524288 00:05:30.205 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.205 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.205 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.205 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.205 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.205 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.205 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.205 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.205 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.205 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.205 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.205 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.205 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.205 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.205 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.205 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.205 EAL: Hugepages will be freed exactly as allocated. 00:05:30.205 EAL: No shared files mode enabled, IPC is disabled 00:05:30.205 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: TSC frequency is ~2200000 KHz 00:05:30.463 EAL: Main lcore 0 is ready (tid=7f6bcf27ba00;cpuset=[0]) 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 0 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.463 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:30.463 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.463 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:30.463 00:05:30.463 00:05:30.463 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.463 http://cunit.sourceforge.net/ 00:05:30.463 00:05:30.463 00:05:30.463 Suite: components_suite 00:05:30.463 Test: vtophys_malloc_test ...passed 00:05:30.463 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
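Each of the four memseg lists reserved above holds n_segs:8192 segments of 2 MiB hugepages, which is exactly the 0x400000000-byte virtual area EAL asks for per list. A one-line sanity check of that arithmetic:

    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000, matching each VA reservation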
00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.463 EAL: Trying to obtain current memory policy. 
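The allocation ladder in vtophys_spdk_malloc_test is not arbitrary: each request is (2^k + 2) MB, giving the 4, 6, 10, 18 and 34 MB expansions above and continuing below up to 1026 MB. The sequence can be reproduced directly:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB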
00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.463 EAL: Restoring previous memory policy: 4 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.463 EAL: request: mp_malloc_sync 00:05:30.463 EAL: No shared files mode enabled, IPC is disabled 00:05:30.463 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.463 EAL: Trying to obtain current memory policy. 00:05:30.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.464 EAL: Restoring previous memory policy: 4 00:05:30.464 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.464 EAL: request: mp_malloc_sync 00:05:30.464 EAL: No shared files mode enabled, IPC is disabled 00:05:30.464 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.464 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.721 EAL: request: mp_malloc_sync 00:05:30.721 EAL: No shared files mode enabled, IPC is disabled 00:05:30.721 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.721 EAL: Trying to obtain current memory policy. 00:05:30.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.721 EAL: Restoring previous memory policy: 4 00:05:30.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.721 EAL: request: mp_malloc_sync 00:05:30.721 EAL: No shared files mode enabled, IPC is disabled 00:05:30.721 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.721 EAL: request: mp_malloc_sync 00:05:30.721 EAL: No shared files mode enabled, IPC is disabled 00:05:30.721 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.721 EAL: Trying to obtain current memory policy. 
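Every expansion is paired with a shrink once the buffer is freed and the 'spdk:(nil)' mem event callback fires. If this output were captured to a file (the name here is illustrative), the pairing is easy to confirm:

    grep -c 'was expanded by' vtophys.log   # the two counts should match
    grep -c 'was shrunk by'   vtophys.log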
00:05:30.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.979 EAL: Restoring previous memory policy: 4 00:05:30.979 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.979 EAL: request: mp_malloc_sync 00:05:30.979 EAL: No shared files mode enabled, IPC is disabled 00:05:30.979 EAL: Heap on socket 0 was expanded by 1026MB 00:05:30.979 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.238 passed 00:05:31.238 00:05:31.238 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.238 suites 1 1 n/a 0 0 00:05:31.238 tests 2 2 2 0 0 00:05:31.238 asserts 5274 5274 5274 0 n/a 00:05:31.238 00:05:31.238 Elapsed time = 0.664 seconds 00:05:31.238 EAL: request: mp_malloc_sync 00:05:31.238 EAL: No shared files mode enabled, IPC is disabled 00:05:31.238 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.238 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.238 EAL: request: mp_malloc_sync 00:05:31.238 EAL: No shared files mode enabled, IPC is disabled 00:05:31.238 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.238 EAL: No shared files mode enabled, IPC is disabled 00:05:31.238 EAL: No shared files mode enabled, IPC is disabled 00:05:31.238 EAL: No shared files mode enabled, IPC is disabled 00:05:31.238 00:05:31.238 real 0m0.851s 00:05:31.238 user 0m0.426s 00:05:31.238 sys 0m0.297s 00:05:31.238 06:30:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.238 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.238 ************************************ 00:05:31.238 END TEST env_vtophys 00:05:31.238 ************************************ 00:05:31.238 06:30:26 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.238 06:30:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.238 06:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.238 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.238 ************************************ 00:05:31.238 START TEST env_pci 00:05:31.238 ************************************ 00:05:31.238 06:30:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:31.238 00:05:31.238 00:05:31.238 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.238 http://cunit.sourceforge.net/ 00:05:31.238 00:05:31.238 00:05:31.238 Suite: pci 00:05:31.238 Test: pci_hook ...[2024-12-05 06:30:26.518209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65468 has claimed it 00:05:31.238 passed 00:05:31.238 00:05:31.238 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.238 suites 1 1 n/a 0 0 00:05:31.238 tests 1 1 1 0 0 00:05:31.238 asserts 25 25 25 0 n/a 00:05:31.238 00:05:31.238 Elapsed time = 0.002 seconds 00:05:31.238 EAL: Cannot find device (10000:00:01.0) 00:05:31.238 EAL: Failed to attach device on primary process 00:05:31.238 00:05:31.238 real 0m0.021s 00:05:31.238 user 0m0.007s 00:05:31.238 sys 0m0.013s 00:05:31.238 06:30:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.238 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.238 ************************************ 00:05:31.238 END TEST env_pci 00:05:31.238 ************************************ 00:05:31.238 06:30:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.238 06:30:26 -- env/env.sh@15 -- # uname 00:05:31.238 06:30:26 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.238 06:30:26 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:31.238 06:30:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.238 06:30:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:31.238 06:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.238 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.238 ************************************ 00:05:31.238 START TEST env_dpdk_post_init 00:05:31.238 ************************************ 00:05:31.238 06:30:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.238 EAL: Detected CPU lcores: 10 00:05:31.238 EAL: Detected NUMA nodes: 1 00:05:31.238 EAL: Detected shared linkage of DPDK 00:05:31.238 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.238 EAL: Selected IOVA mode 'PA' 00:05:31.497 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.497 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:31.497 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:31.497 Starting DPDK initialization... 00:05:31.497 Starting SPDK post initialization... 00:05:31.497 SPDK NVMe probe 00:05:31.497 Attaching to 0000:00:06.0 00:05:31.497 Attaching to 0000:00:07.0 00:05:31.497 Attached to 0000:00:06.0 00:05:31.497 Attached to 0000:00:07.0 00:05:31.497 Cleaning up... 00:05:31.497 00:05:31.497 real 0m0.170s 00:05:31.497 user 0m0.038s 00:05:31.497 sys 0m0.032s 00:05:31.497 06:30:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.497 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.497 ************************************ 00:05:31.497 END TEST env_dpdk_post_init 00:05:31.497 ************************************ 00:05:31.497 06:30:26 -- env/env.sh@26 -- # uname 00:05:31.497 06:30:26 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.497 06:30:26 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.497 06:30:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.497 06:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.497 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.497 ************************************ 00:05:31.497 START TEST env_mem_callbacks 00:05:31.497 ************************************ 00:05:31.497 06:30:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.497 EAL: Detected CPU lcores: 10 00:05:31.497 EAL: Detected NUMA nodes: 1 00:05:31.497 EAL: Detected shared linkage of DPDK 00:05:31.497 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.497 EAL: Selected IOVA mode 'PA' 00:05:31.497 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.497 00:05:31.497 00:05:31.497 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.497 http://cunit.sourceforge.net/ 00:05:31.497 00:05:31.497 00:05:31.497 Suite: memory 00:05:31.497 Test: test ... 
00:05:31.497 register 0x200000200000 2097152 00:05:31.497 malloc 3145728 00:05:31.497 register 0x200000400000 4194304 00:05:31.497 buf 0x200000500000 len 3145728 PASSED 00:05:31.497 malloc 64 00:05:31.498 buf 0x2000004fff40 len 64 PASSED 00:05:31.498 malloc 4194304 00:05:31.498 register 0x200000800000 6291456 00:05:31.498 buf 0x200000a00000 len 4194304 PASSED 00:05:31.498 free 0x200000500000 3145728 00:05:31.498 free 0x2000004fff40 64 00:05:31.498 unregister 0x200000400000 4194304 PASSED 00:05:31.498 free 0x200000a00000 4194304 00:05:31.498 unregister 0x200000800000 6291456 PASSED 00:05:31.498 malloc 8388608 00:05:31.498 register 0x200000400000 10485760 00:05:31.498 buf 0x200000600000 len 8388608 PASSED 00:05:31.498 free 0x200000600000 8388608 00:05:31.498 unregister 0x200000400000 10485760 PASSED 00:05:31.498 passed 00:05:31.498 00:05:31.498 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.498 suites 1 1 n/a 0 0 00:05:31.498 tests 1 1 1 0 0 00:05:31.498 asserts 15 15 15 0 n/a 00:05:31.498 00:05:31.498 Elapsed time = 0.008 seconds 00:05:31.498 00:05:31.498 real 0m0.141s 00:05:31.498 user 0m0.018s 00:05:31.498 sys 0m0.022s 00:05:31.498 06:30:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.498 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.498 ************************************ 00:05:31.498 END TEST env_mem_callbacks 00:05:31.498 ************************************ 00:05:31.756 00:05:31.756 real 0m1.860s 00:05:31.756 user 0m0.887s 00:05:31.756 sys 0m0.628s 00:05:31.756 06:30:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.756 06:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.756 ************************************ 00:05:31.756 END TEST env 00:05:31.757 ************************************ 00:05:31.757 06:30:27 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.757 06:30:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.757 06:30:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.757 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:31.757 ************************************ 00:05:31.757 START TEST rpc 00:05:31.757 ************************************ 00:05:31.757 06:30:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.757 * Looking for test storage... 
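The rpc suite starting here follows the harness bring-up that the trace below records: launch spdk_tgt with the bdev tracepoint group enabled, install a cleanup trap, and block until the RPC socket answers. A condensed sketch (waitforlisten and killprocess are the harness helpers seen in the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_pid"   # polls /var/tmp/spdk.sock until the target is ready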
00:05:31.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.757 06:30:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.757 06:30:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.757 06:30:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.757 06:30:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.757 06:30:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.757 06:30:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.757 06:30:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.757 06:30:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.757 06:30:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.757 06:30:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.757 06:30:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.757 06:30:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.757 06:30:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.757 06:30:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.757 06:30:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.757 06:30:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.757 06:30:27 -- scripts/common.sh@344 -- # : 1 00:05:31.757 06:30:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.757 06:30:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.757 06:30:27 -- scripts/common.sh@364 -- # decimal 1 00:05:31.757 06:30:27 -- scripts/common.sh@352 -- # local d=1 00:05:31.757 06:30:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.757 06:30:27 -- scripts/common.sh@354 -- # echo 1 00:05:32.016 06:30:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.016 06:30:27 -- scripts/common.sh@365 -- # decimal 2 00:05:32.016 06:30:27 -- scripts/common.sh@352 -- # local d=2 00:05:32.016 06:30:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.016 06:30:27 -- scripts/common.sh@354 -- # echo 2 00:05:32.016 06:30:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.016 06:30:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.016 06:30:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.016 06:30:27 -- scripts/common.sh@367 -- # return 0 00:05:32.016 06:30:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.016 06:30:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.016 --rc genhtml_branch_coverage=1 00:05:32.016 --rc genhtml_function_coverage=1 00:05:32.016 --rc genhtml_legend=1 00:05:32.016 --rc geninfo_all_blocks=1 00:05:32.016 --rc geninfo_unexecuted_blocks=1 00:05:32.016 00:05:32.016 ' 00:05:32.016 06:30:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.016 --rc genhtml_branch_coverage=1 00:05:32.016 --rc genhtml_function_coverage=1 00:05:32.016 --rc genhtml_legend=1 00:05:32.016 --rc geninfo_all_blocks=1 00:05:32.016 --rc geninfo_unexecuted_blocks=1 00:05:32.016 00:05:32.016 ' 00:05:32.016 06:30:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.016 --rc genhtml_branch_coverage=1 00:05:32.016 --rc genhtml_function_coverage=1 00:05:32.016 --rc genhtml_legend=1 00:05:32.016 --rc geninfo_all_blocks=1 00:05:32.016 --rc geninfo_unexecuted_blocks=1 00:05:32.016 00:05:32.016 ' 00:05:32.016 06:30:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.016 --rc genhtml_branch_coverage=1 00:05:32.016 --rc genhtml_function_coverage=1 00:05:32.016 --rc genhtml_legend=1 00:05:32.016 --rc geninfo_all_blocks=1 00:05:32.016 --rc geninfo_unexecuted_blocks=1 00:05:32.016 00:05:32.016 ' 00:05:32.016 06:30:27 -- rpc/rpc.sh@65 -- # spdk_pid=65590 00:05:32.016 06:30:27 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:32.016 06:30:27 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.016 06:30:27 -- rpc/rpc.sh@67 -- # waitforlisten 65590 00:05:32.016 06:30:27 -- common/autotest_common.sh@829 -- # '[' -z 65590 ']' 00:05:32.016 06:30:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.016 06:30:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.016 06:30:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.016 06:30:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.016 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:32.016 [2024-12-05 06:30:27.297643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:32.016 [2024-12-05 06:30:27.297764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65590 ] 00:05:32.016 [2024-12-05 06:30:27.435321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.016 [2024-12-05 06:30:27.468671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.016 [2024-12-05 06:30:27.468848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.016 [2024-12-05 06:30:27.468861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65590' to capture a snapshot of events at runtime. 00:05:32.016 [2024-12-05 06:30:27.468869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65590 for offline analysis/debug. 
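The target's startup notices above also spell out how to decode the bdev tracepoints it enabled, either live against the running pid or offline from the shm copy:

    spdk_trace -s spdk_tgt -p 65590              # live snapshot, as the notice suggests
    cp /dev/shm/spdk_tgt_trace.pid65590 /tmp/    # or keep the buffer for offline analysis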
00:05:32.016 [2024-12-05 06:30:27.468898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.953 06:30:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.953 06:30:28 -- common/autotest_common.sh@862 -- # return 0 00:05:32.953 06:30:28 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.953 06:30:28 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.953 06:30:28 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.953 06:30:28 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.953 06:30:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.953 06:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.953 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:32.953 ************************************ 00:05:32.953 START TEST rpc_integrity 00:05:32.953 ************************************ 00:05:32.953 06:30:28 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:32.953 06:30:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.953 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.953 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:32.953 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.953 06:30:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.953 06:30:28 -- rpc/rpc.sh@13 -- # jq length 00:05:32.953 06:30:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.953 06:30:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.953 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.953 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:32.953 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.953 06:30:28 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.953 06:30:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.953 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.953 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:32.953 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.953 06:30:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.953 { 00:05:32.953 "name": "Malloc0", 00:05:32.953 "aliases": [ 00:05:32.953 "371030d9-ed69-4ca4-a138-840cf224acae" 00:05:32.953 ], 00:05:32.953 "product_name": "Malloc disk", 00:05:32.953 "block_size": 512, 00:05:32.953 "num_blocks": 16384, 00:05:32.953 "uuid": "371030d9-ed69-4ca4-a138-840cf224acae", 00:05:32.953 "assigned_rate_limits": { 00:05:32.953 "rw_ios_per_sec": 0, 00:05:32.953 "rw_mbytes_per_sec": 0, 00:05:32.953 "r_mbytes_per_sec": 0, 00:05:32.953 "w_mbytes_per_sec": 0 00:05:32.953 }, 00:05:32.953 "claimed": false, 00:05:32.953 "zoned": false, 00:05:32.953 "supported_io_types": { 00:05:32.953 "read": true, 00:05:32.953 "write": true, 00:05:32.953 "unmap": true, 00:05:32.953 "write_zeroes": true, 00:05:32.953 "flush": true, 00:05:32.953 "reset": true, 00:05:32.953 "compare": false, 00:05:32.953 "compare_and_write": false, 00:05:32.953 "abort": true, 00:05:32.953 "nvme_admin": false, 00:05:32.953 "nvme_io": false 00:05:32.953 }, 00:05:32.953 "memory_domains": [ 00:05:32.953 { 00:05:32.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.953 
"dma_device_type": 2 00:05:32.953 } 00:05:32.953 ], 00:05:32.953 "driver_specific": {} 00:05:32.953 } 00:05:32.953 ]' 00:05:32.953 06:30:28 -- rpc/rpc.sh@17 -- # jq length 00:05:33.213 06:30:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.213 06:30:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.213 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.213 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.213 [2024-12-05 06:30:28.458179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.213 [2024-12-05 06:30:28.458253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.213 [2024-12-05 06:30:28.458273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f4790 00:05:33.213 [2024-12-05 06:30:28.458282] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.213 [2024-12-05 06:30:28.459821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.213 [2024-12-05 06:30:28.459869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.213 Passthru0 00:05:33.213 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.213 06:30:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.213 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.213 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.213 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.213 06:30:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.213 { 00:05:33.213 "name": "Malloc0", 00:05:33.213 "aliases": [ 00:05:33.213 "371030d9-ed69-4ca4-a138-840cf224acae" 00:05:33.213 ], 00:05:33.213 "product_name": "Malloc disk", 00:05:33.213 "block_size": 512, 00:05:33.213 "num_blocks": 16384, 00:05:33.213 "uuid": "371030d9-ed69-4ca4-a138-840cf224acae", 00:05:33.213 "assigned_rate_limits": { 00:05:33.213 "rw_ios_per_sec": 0, 00:05:33.213 "rw_mbytes_per_sec": 0, 00:05:33.213 "r_mbytes_per_sec": 0, 00:05:33.213 "w_mbytes_per_sec": 0 00:05:33.213 }, 00:05:33.213 "claimed": true, 00:05:33.213 "claim_type": "exclusive_write", 00:05:33.213 "zoned": false, 00:05:33.213 "supported_io_types": { 00:05:33.213 "read": true, 00:05:33.213 "write": true, 00:05:33.213 "unmap": true, 00:05:33.213 "write_zeroes": true, 00:05:33.213 "flush": true, 00:05:33.213 "reset": true, 00:05:33.213 "compare": false, 00:05:33.213 "compare_and_write": false, 00:05:33.213 "abort": true, 00:05:33.213 "nvme_admin": false, 00:05:33.213 "nvme_io": false 00:05:33.213 }, 00:05:33.213 "memory_domains": [ 00:05:33.213 { 00:05:33.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.213 "dma_device_type": 2 00:05:33.213 } 00:05:33.213 ], 00:05:33.213 "driver_specific": {} 00:05:33.213 }, 00:05:33.213 { 00:05:33.213 "name": "Passthru0", 00:05:33.213 "aliases": [ 00:05:33.213 "e9a545b9-5e50-5f89-8f0f-4684f52db25f" 00:05:33.213 ], 00:05:33.213 "product_name": "passthru", 00:05:33.213 "block_size": 512, 00:05:33.213 "num_blocks": 16384, 00:05:33.213 "uuid": "e9a545b9-5e50-5f89-8f0f-4684f52db25f", 00:05:33.213 "assigned_rate_limits": { 00:05:33.213 "rw_ios_per_sec": 0, 00:05:33.213 "rw_mbytes_per_sec": 0, 00:05:33.213 "r_mbytes_per_sec": 0, 00:05:33.213 "w_mbytes_per_sec": 0 00:05:33.213 }, 00:05:33.213 "claimed": false, 00:05:33.213 "zoned": false, 00:05:33.213 "supported_io_types": { 00:05:33.213 "read": true, 00:05:33.213 "write": true, 00:05:33.213 "unmap": true, 00:05:33.213 
"write_zeroes": true, 00:05:33.213 "flush": true, 00:05:33.213 "reset": true, 00:05:33.213 "compare": false, 00:05:33.213 "compare_and_write": false, 00:05:33.213 "abort": true, 00:05:33.213 "nvme_admin": false, 00:05:33.213 "nvme_io": false 00:05:33.213 }, 00:05:33.213 "memory_domains": [ 00:05:33.213 { 00:05:33.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.213 "dma_device_type": 2 00:05:33.213 } 00:05:33.213 ], 00:05:33.213 "driver_specific": { 00:05:33.213 "passthru": { 00:05:33.213 "name": "Passthru0", 00:05:33.213 "base_bdev_name": "Malloc0" 00:05:33.213 } 00:05:33.213 } 00:05:33.213 } 00:05:33.213 ]' 00:05:33.213 06:30:28 -- rpc/rpc.sh@21 -- # jq length 00:05:33.213 06:30:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.213 06:30:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.213 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.213 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.213 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.213 06:30:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.213 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.213 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.213 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.213 06:30:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.213 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.213 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.213 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.214 06:30:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.214 06:30:28 -- rpc/rpc.sh@26 -- # jq length 00:05:33.214 06:30:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.214 00:05:33.214 real 0m0.287s 00:05:33.214 user 0m0.190s 00:05:33.214 sys 0m0.034s 00:05:33.214 06:30:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.214 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.214 ************************************ 00:05:33.214 END TEST rpc_integrity 00:05:33.214 ************************************ 00:05:33.214 06:30:28 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.214 06:30:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.214 06:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.214 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.214 ************************************ 00:05:33.214 START TEST rpc_plugins 00:05:33.214 ************************************ 00:05:33.214 06:30:28 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:33.214 06:30:28 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.214 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.214 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.214 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.214 06:30:28 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.214 06:30:28 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.214 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.214 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.473 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.473 06:30:28 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.473 { 00:05:33.473 "name": "Malloc1", 00:05:33.473 "aliases": [ 00:05:33.473 "c9415536-f553-47dc-8cee-9edc8965824d" 00:05:33.473 ], 00:05:33.473 "product_name": "Malloc disk", 00:05:33.473 
"block_size": 4096, 00:05:33.473 "num_blocks": 256, 00:05:33.473 "uuid": "c9415536-f553-47dc-8cee-9edc8965824d", 00:05:33.473 "assigned_rate_limits": { 00:05:33.473 "rw_ios_per_sec": 0, 00:05:33.473 "rw_mbytes_per_sec": 0, 00:05:33.473 "r_mbytes_per_sec": 0, 00:05:33.473 "w_mbytes_per_sec": 0 00:05:33.473 }, 00:05:33.473 "claimed": false, 00:05:33.473 "zoned": false, 00:05:33.473 "supported_io_types": { 00:05:33.473 "read": true, 00:05:33.473 "write": true, 00:05:33.473 "unmap": true, 00:05:33.473 "write_zeroes": true, 00:05:33.473 "flush": true, 00:05:33.473 "reset": true, 00:05:33.473 "compare": false, 00:05:33.473 "compare_and_write": false, 00:05:33.473 "abort": true, 00:05:33.473 "nvme_admin": false, 00:05:33.473 "nvme_io": false 00:05:33.473 }, 00:05:33.473 "memory_domains": [ 00:05:33.473 { 00:05:33.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.473 "dma_device_type": 2 00:05:33.473 } 00:05:33.473 ], 00:05:33.473 "driver_specific": {} 00:05:33.473 } 00:05:33.473 ]' 00:05:33.473 06:30:28 -- rpc/rpc.sh@32 -- # jq length 00:05:33.473 06:30:28 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.473 06:30:28 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.473 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.473 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.473 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.473 06:30:28 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.473 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.473 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.473 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.473 06:30:28 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.473 06:30:28 -- rpc/rpc.sh@36 -- # jq length 00:05:33.473 06:30:28 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.473 00:05:33.473 real 0m0.159s 00:05:33.473 user 0m0.106s 00:05:33.473 sys 0m0.018s 00:05:33.473 06:30:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.473 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.473 ************************************ 00:05:33.473 END TEST rpc_plugins 00:05:33.473 ************************************ 00:05:33.473 06:30:28 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.473 06:30:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.473 06:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.473 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.473 ************************************ 00:05:33.473 START TEST rpc_trace_cmd_test 00:05:33.473 ************************************ 00:05:33.473 06:30:28 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:33.474 06:30:28 -- rpc/rpc.sh@40 -- # local info 00:05:33.474 06:30:28 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.474 06:30:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.474 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:33.474 06:30:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.474 06:30:28 -- rpc/rpc.sh@42 -- # info='{ 00:05:33.474 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65590", 00:05:33.474 "tpoint_group_mask": "0x8", 00:05:33.474 "iscsi_conn": { 00:05:33.474 "mask": "0x2", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "scsi": { 00:05:33.474 "mask": "0x4", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "bdev": { 00:05:33.474 "mask": "0x8", 00:05:33.474 "tpoint_mask": 
"0xffffffffffffffff" 00:05:33.474 }, 00:05:33.474 "nvmf_rdma": { 00:05:33.474 "mask": "0x10", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "nvmf_tcp": { 00:05:33.474 "mask": "0x20", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "ftl": { 00:05:33.474 "mask": "0x40", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "blobfs": { 00:05:33.474 "mask": "0x80", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "dsa": { 00:05:33.474 "mask": "0x200", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "thread": { 00:05:33.474 "mask": "0x400", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "nvme_pcie": { 00:05:33.474 "mask": "0x800", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "iaa": { 00:05:33.474 "mask": "0x1000", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "nvme_tcp": { 00:05:33.474 "mask": "0x2000", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 }, 00:05:33.474 "bdev_nvme": { 00:05:33.474 "mask": "0x4000", 00:05:33.474 "tpoint_mask": "0x0" 00:05:33.474 } 00:05:33.474 }' 00:05:33.474 06:30:28 -- rpc/rpc.sh@43 -- # jq length 00:05:33.732 06:30:28 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:33.732 06:30:28 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.732 06:30:28 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.732 06:30:28 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.732 06:30:29 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.732 06:30:29 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.732 06:30:29 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.732 06:30:29 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.732 06:30:29 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.732 00:05:33.732 real 0m0.276s 00:05:33.732 user 0m0.237s 00:05:33.732 sys 0m0.030s 00:05:33.732 06:30:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.732 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.732 ************************************ 00:05:33.732 END TEST rpc_trace_cmd_test 00:05:33.732 ************************************ 00:05:33.732 06:30:29 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.732 06:30:29 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.732 06:30:29 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.732 06:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.732 06:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.732 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.732 ************************************ 00:05:33.732 START TEST rpc_daemon_integrity 00:05:33.732 ************************************ 00:05:33.732 06:30:29 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:33.732 06:30:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.732 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.732 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.991 06:30:29 -- rpc/rpc.sh@13 -- # jq length 00:05:33.991 06:30:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.991 06:30:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.991 06:30:29 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.991 { 00:05:33.991 "name": "Malloc2", 00:05:33.991 "aliases": [ 00:05:33.991 "b41f5bf0-c58f-42c9-9e77-3afe4acd2464" 00:05:33.991 ], 00:05:33.991 "product_name": "Malloc disk", 00:05:33.991 "block_size": 512, 00:05:33.991 "num_blocks": 16384, 00:05:33.991 "uuid": "b41f5bf0-c58f-42c9-9e77-3afe4acd2464", 00:05:33.991 "assigned_rate_limits": { 00:05:33.991 "rw_ios_per_sec": 0, 00:05:33.991 "rw_mbytes_per_sec": 0, 00:05:33.991 "r_mbytes_per_sec": 0, 00:05:33.991 "w_mbytes_per_sec": 0 00:05:33.991 }, 00:05:33.991 "claimed": false, 00:05:33.991 "zoned": false, 00:05:33.991 "supported_io_types": { 00:05:33.991 "read": true, 00:05:33.991 "write": true, 00:05:33.991 "unmap": true, 00:05:33.991 "write_zeroes": true, 00:05:33.991 "flush": true, 00:05:33.991 "reset": true, 00:05:33.991 "compare": false, 00:05:33.991 "compare_and_write": false, 00:05:33.991 "abort": true, 00:05:33.991 "nvme_admin": false, 00:05:33.991 "nvme_io": false 00:05:33.991 }, 00:05:33.991 "memory_domains": [ 00:05:33.991 { 00:05:33.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.991 "dma_device_type": 2 00:05:33.991 } 00:05:33.991 ], 00:05:33.991 "driver_specific": {} 00:05:33.991 } 00:05:33.991 ]' 00:05:33.991 06:30:29 -- rpc/rpc.sh@17 -- # jq length 00:05:33.991 06:30:29 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.991 06:30:29 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 [2024-12-05 06:30:29.338577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.991 [2024-12-05 06:30:29.338660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.991 [2024-12-05 06:30:29.338677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12e5fe0 00:05:33.991 [2024-12-05 06:30:29.338687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.991 [2024-12-05 06:30:29.339984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.991 [2024-12-05 06:30:29.340029] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.991 Passthru0 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.991 { 00:05:33.991 "name": "Malloc2", 00:05:33.991 "aliases": [ 00:05:33.991 "b41f5bf0-c58f-42c9-9e77-3afe4acd2464" 00:05:33.991 ], 00:05:33.991 "product_name": "Malloc disk", 00:05:33.991 "block_size": 512, 00:05:33.991 "num_blocks": 16384, 00:05:33.991 "uuid": "b41f5bf0-c58f-42c9-9e77-3afe4acd2464", 00:05:33.991 "assigned_rate_limits": { 00:05:33.991 "rw_ios_per_sec": 0, 00:05:33.991 "rw_mbytes_per_sec": 0, 00:05:33.991 "r_mbytes_per_sec": 0, 00:05:33.991 
"w_mbytes_per_sec": 0 00:05:33.991 }, 00:05:33.991 "claimed": true, 00:05:33.991 "claim_type": "exclusive_write", 00:05:33.991 "zoned": false, 00:05:33.991 "supported_io_types": { 00:05:33.991 "read": true, 00:05:33.991 "write": true, 00:05:33.991 "unmap": true, 00:05:33.991 "write_zeroes": true, 00:05:33.991 "flush": true, 00:05:33.991 "reset": true, 00:05:33.991 "compare": false, 00:05:33.991 "compare_and_write": false, 00:05:33.991 "abort": true, 00:05:33.991 "nvme_admin": false, 00:05:33.991 "nvme_io": false 00:05:33.991 }, 00:05:33.991 "memory_domains": [ 00:05:33.991 { 00:05:33.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.991 "dma_device_type": 2 00:05:33.991 } 00:05:33.991 ], 00:05:33.991 "driver_specific": {} 00:05:33.991 }, 00:05:33.991 { 00:05:33.991 "name": "Passthru0", 00:05:33.991 "aliases": [ 00:05:33.991 "993cb8ac-a945-55a3-a2e6-aca79ee5d40e" 00:05:33.991 ], 00:05:33.991 "product_name": "passthru", 00:05:33.991 "block_size": 512, 00:05:33.991 "num_blocks": 16384, 00:05:33.991 "uuid": "993cb8ac-a945-55a3-a2e6-aca79ee5d40e", 00:05:33.991 "assigned_rate_limits": { 00:05:33.991 "rw_ios_per_sec": 0, 00:05:33.991 "rw_mbytes_per_sec": 0, 00:05:33.991 "r_mbytes_per_sec": 0, 00:05:33.991 "w_mbytes_per_sec": 0 00:05:33.991 }, 00:05:33.991 "claimed": false, 00:05:33.991 "zoned": false, 00:05:33.991 "supported_io_types": { 00:05:33.991 "read": true, 00:05:33.991 "write": true, 00:05:33.991 "unmap": true, 00:05:33.991 "write_zeroes": true, 00:05:33.991 "flush": true, 00:05:33.991 "reset": true, 00:05:33.991 "compare": false, 00:05:33.991 "compare_and_write": false, 00:05:33.991 "abort": true, 00:05:33.991 "nvme_admin": false, 00:05:33.991 "nvme_io": false 00:05:33.991 }, 00:05:33.991 "memory_domains": [ 00:05:33.991 { 00:05:33.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.991 "dma_device_type": 2 00:05:33.991 } 00:05:33.991 ], 00:05:33.991 "driver_specific": { 00:05:33.991 "passthru": { 00:05:33.991 "name": "Passthru0", 00:05:33.991 "base_bdev_name": "Malloc2" 00:05:33.991 } 00:05:33.991 } 00:05:33.991 } 00:05:33.991 ]' 00:05:33.991 06:30:29 -- rpc/rpc.sh@21 -- # jq length 00:05:33.991 06:30:29 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.991 06:30:29 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.991 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.991 06:30:29 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.991 06:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.991 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:34.250 06:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.250 06:30:29 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.250 06:30:29 -- rpc/rpc.sh@26 -- # jq length 00:05:34.250 06:30:29 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.250 00:05:34.250 real 0m0.315s 00:05:34.250 user 0m0.219s 00:05:34.250 sys 0m0.031s 00:05:34.250 06:30:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.250 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:34.250 ************************************ 00:05:34.250 END TEST 
rpc_daemon_integrity 00:05:34.250 ************************************ 00:05:34.250 06:30:29 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.250 06:30:29 -- rpc/rpc.sh@84 -- # killprocess 65590 00:05:34.250 06:30:29 -- common/autotest_common.sh@936 -- # '[' -z 65590 ']' 00:05:34.250 06:30:29 -- common/autotest_common.sh@940 -- # kill -0 65590 00:05:34.250 06:30:29 -- common/autotest_common.sh@941 -- # uname 00:05:34.250 06:30:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.250 06:30:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65590 00:05:34.250 06:30:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.250 06:30:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.250 killing process with pid 65590 00:05:34.250 06:30:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65590' 00:05:34.250 06:30:29 -- common/autotest_common.sh@955 -- # kill 65590 00:05:34.250 06:30:29 -- common/autotest_common.sh@960 -- # wait 65590 00:05:34.509 00:05:34.509 real 0m2.761s 00:05:34.509 user 0m3.728s 00:05:34.509 sys 0m0.559s 00:05:34.509 06:30:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.509 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:34.509 ************************************ 00:05:34.509 END TEST rpc 00:05:34.509 ************************************ 00:05:34.509 06:30:29 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.509 06:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.509 06:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.509 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:34.509 ************************************ 00:05:34.509 START TEST rpc_client 00:05:34.509 ************************************ 00:05:34.509 06:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.509 * Looking for test storage... 00:05:34.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.509 06:30:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.509 06:30:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.509 06:30:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.767 06:30:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.767 06:30:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.767 06:30:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.767 06:30:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.767 06:30:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.767 06:30:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.767 06:30:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.767 06:30:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.767 06:30:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.767 06:30:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.767 06:30:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.767 06:30:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.767 06:30:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.767 06:30:30 -- scripts/common.sh@344 -- # : 1 00:05:34.767 06:30:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.767 06:30:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.767 06:30:30 -- scripts/common.sh@364 -- # decimal 1 00:05:34.767 06:30:30 -- scripts/common.sh@352 -- # local d=1 00:05:34.767 06:30:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.767 06:30:30 -- scripts/common.sh@354 -- # echo 1 00:05:34.767 06:30:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.767 06:30:30 -- scripts/common.sh@365 -- # decimal 2 00:05:34.767 06:30:30 -- scripts/common.sh@352 -- # local d=2 00:05:34.767 06:30:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.767 06:30:30 -- scripts/common.sh@354 -- # echo 2 00:05:34.767 06:30:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.767 06:30:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.767 06:30:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.767 06:30:30 -- scripts/common.sh@367 -- # return 0 00:05:34.767 06:30:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.767 06:30:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 06:30:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 06:30:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.768 --rc geninfo_all_blocks=1 00:05:34.768 --rc geninfo_unexecuted_blocks=1 00:05:34.768 00:05:34.768 ' 00:05:34.768 06:30:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.768 --rc genhtml_branch_coverage=1 00:05:34.768 --rc genhtml_function_coverage=1 00:05:34.768 --rc genhtml_legend=1 00:05:34.768 --rc geninfo_all_blocks=1 00:05:34.768 --rc geninfo_unexecuted_blocks=1 00:05:34.768 00:05:34.768 ' 00:05:34.768 06:30:30 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.768 OK 00:05:34.768 06:30:30 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.768 00:05:34.768 real 0m0.241s 00:05:34.768 user 0m0.168s 00:05:34.768 sys 0m0.083s 00:05:34.768 06:30:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.768 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.768 ************************************ 00:05:34.768 END TEST rpc_client 00:05:34.768 ************************************ 00:05:34.768 06:30:30 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.768 06:30:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.768 06:30:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.768 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:34.768 ************************************ 00:05:34.768 START TEST 
json_config 00:05:34.768 ************************************ 00:05:34.768 06:30:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.768 06:30:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.768 06:30:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.768 06:30:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.027 06:30:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.027 06:30:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.027 06:30:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.027 06:30:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.027 06:30:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.027 06:30:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.027 06:30:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.027 06:30:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.027 06:30:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.027 06:30:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.027 06:30:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.027 06:30:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.027 06:30:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.027 06:30:30 -- scripts/common.sh@344 -- # : 1 00:05:35.027 06:30:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.027 06:30:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.027 06:30:30 -- scripts/common.sh@364 -- # decimal 1 00:05:35.027 06:30:30 -- scripts/common.sh@352 -- # local d=1 00:05:35.027 06:30:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.027 06:30:30 -- scripts/common.sh@354 -- # echo 1 00:05:35.027 06:30:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.027 06:30:30 -- scripts/common.sh@365 -- # decimal 2 00:05:35.027 06:30:30 -- scripts/common.sh@352 -- # local d=2 00:05:35.027 06:30:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.027 06:30:30 -- scripts/common.sh@354 -- # echo 2 00:05:35.027 06:30:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.027 06:30:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.027 06:30:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.027 06:30:30 -- scripts/common.sh@367 -- # return 0 00:05:35.027 06:30:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.027 06:30:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.027 --rc genhtml_branch_coverage=1 00:05:35.027 --rc genhtml_function_coverage=1 00:05:35.027 --rc genhtml_legend=1 00:05:35.027 --rc geninfo_all_blocks=1 00:05:35.027 --rc geninfo_unexecuted_blocks=1 00:05:35.027 00:05:35.027 ' 00:05:35.027 06:30:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.027 --rc genhtml_branch_coverage=1 00:05:35.027 --rc genhtml_function_coverage=1 00:05:35.027 --rc genhtml_legend=1 00:05:35.027 --rc geninfo_all_blocks=1 00:05:35.027 --rc geninfo_unexecuted_blocks=1 00:05:35.027 00:05:35.027 ' 00:05:35.027 06:30:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.027 --rc genhtml_branch_coverage=1 00:05:35.027 --rc genhtml_function_coverage=1 00:05:35.027 --rc genhtml_legend=1 00:05:35.027 --rc 
geninfo_all_blocks=1 00:05:35.027 --rc geninfo_unexecuted_blocks=1 00:05:35.027 00:05:35.027 ' 00:05:35.027 06:30:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.027 --rc genhtml_branch_coverage=1 00:05:35.027 --rc genhtml_function_coverage=1 00:05:35.027 --rc genhtml_legend=1 00:05:35.027 --rc geninfo_all_blocks=1 00:05:35.027 --rc geninfo_unexecuted_blocks=1 00:05:35.027 00:05:35.027 ' 00:05:35.027 06:30:30 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.027 06:30:30 -- nvmf/common.sh@7 -- # uname -s 00:05:35.027 06:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.027 06:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.027 06:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.027 06:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.027 06:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.027 06:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.027 06:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.027 06:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.027 06:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.027 06:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.027 06:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:05:35.027 06:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:05:35.027 06:30:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.027 06:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.027 06:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.027 06:30:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.027 06:30:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.027 06:30:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.027 06:30:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.027 06:30:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.027 06:30:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.027 06:30:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.027 
06:30:30 -- paths/export.sh@5 -- # export PATH 00:05:35.027 06:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.027 06:30:30 -- nvmf/common.sh@46 -- # : 0 00:05:35.027 06:30:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:35.027 06:30:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:35.027 06:30:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:35.027 06:30:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.027 06:30:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.027 06:30:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:35.027 06:30:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:35.027 06:30:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:35.028 06:30:30 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.028 06:30:30 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.028 06:30:30 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:35.028 06:30:30 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.028 06:30:30 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:35.028 06:30:30 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.028 06:30:30 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:35.028 06:30:30 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:35.028 06:30:30 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:35.028 06:30:30 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:35.028 06:30:30 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.028 INFO: JSON configuration test init 00:05:35.028 06:30:30 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:35.028 06:30:30 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:35.028 06:30:30 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:35.028 06:30:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.028 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.028 06:30:30 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:35.028 06:30:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.028 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.028 06:30:30 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.028 06:30:30 -- json_config/json_config.sh@98 -- # local app=target 00:05:35.028 
06:30:30 -- json_config/json_config.sh@99 -- # shift 00:05:35.028 06:30:30 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:35.028 06:30:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:35.028 06:30:30 -- json_config/json_config.sh@111 -- # app_pid[$app]=65843 00:05:35.028 Waiting for target to run... 00:05:35.028 06:30:30 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:35.028 06:30:30 -- json_config/json_config.sh@114 -- # waitforlisten 65843 /var/tmp/spdk_tgt.sock 00:05:35.028 06:30:30 -- common/autotest_common.sh@829 -- # '[' -z 65843 ']' 00:05:35.028 06:30:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.028 06:30:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.028 06:30:30 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.028 06:30:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.028 06:30:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.028 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.028 [2024-12-05 06:30:30.396956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:35.028 [2024-12-05 06:30:30.397752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65843 ] 00:05:35.287 [2024-12-05 06:30:30.705396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.287 [2024-12-05 06:30:30.731870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.287 [2024-12-05 06:30:30.732061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.223 00:05:36.223 06:30:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.223 06:30:31 -- common/autotest_common.sh@862 -- # return 0 00:05:36.223 06:30:31 -- json_config/json_config.sh@115 -- # echo '' 00:05:36.223 06:30:31 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:36.223 06:30:31 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:36.223 06:30:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.223 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 06:30:31 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:36.223 06:30:31 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:36.223 06:30:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.223 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.223 06:30:31 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.223 06:30:31 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:36.223 06:30:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:36.482 06:30:31 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:36.482 06:30:31 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:36.482 06:30:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.482 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.482 06:30:31 -- json_config/json_config.sh@48 -- # local ret=0 00:05:36.482 06:30:31 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:36.482 06:30:31 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:36.482 06:30:31 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:36.482 06:30:31 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:36.482 06:30:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:36.740 06:30:32 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:36.740 06:30:32 -- json_config/json_config.sh@51 -- # local get_types 00:05:36.740 06:30:32 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:36.740 06:30:32 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:36.740 06:30:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.740 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 06:30:32 -- json_config/json_config.sh@58 -- # return 0 00:05:36.741 06:30:32 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:36.741 06:30:32 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:36.741 06:30:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.741 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 06:30:32 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:36.741 06:30:32 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:36.741 06:30:32 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.741 06:30:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.999 MallocForNvmf0 00:05:37.259 06:30:32 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.259 06:30:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.259 MallocForNvmf1 00:05:37.259 06:30:32 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.259 06:30:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.517 [2024-12-05 06:30:32.868292] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.517 06:30:32 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.517 06:30:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.775 06:30:33 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.775 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.034 06:30:33 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.035 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.293 06:30:33 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.293 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.293 [2024-12-05 06:30:33.744842] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.552 06:30:33 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:38.552 06:30:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.552 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 06:30:33 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:38.552 06:30:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.552 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 06:30:33 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:38.552 06:30:33 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.552 06:30:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.810 MallocBdevForConfigChangeCheck 00:05:38.810 06:30:34 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:38.810 06:30:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.810 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:38.810 06:30:34 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:38.810 06:30:34 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.069 INFO: shutting down applications... 00:05:39.069 06:30:34 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
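[editor's note] The json_config_setup_target trace above amounts to a short RPC sequence; a rough sketch of it as plain commands, assuming an spdk_tgt is already listening on /var/tmp/spdk_tgt.sock. Every RPC below appears verbatim in this run; only the shell variable is a placeholder.
    # Sketch of the nvmf JSON-config bring-up traced above (assumes a running target).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # create the TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config                                        # dump the resulting state as JSON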
00:05:39.069 06:30:34 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:39.069 06:30:34 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:39.069 06:30:34 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:39.069 06:30:34 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.636 Calling clear_iscsi_subsystem 00:05:39.636 Calling clear_nvmf_subsystem 00:05:39.636 Calling clear_nbd_subsystem 00:05:39.636 Calling clear_ublk_subsystem 00:05:39.636 Calling clear_vhost_blk_subsystem 00:05:39.636 Calling clear_vhost_scsi_subsystem 00:05:39.636 Calling clear_scheduler_subsystem 00:05:39.636 Calling clear_bdev_subsystem 00:05:39.636 Calling clear_accel_subsystem 00:05:39.636 Calling clear_vmd_subsystem 00:05:39.636 Calling clear_sock_subsystem 00:05:39.636 Calling clear_iobuf_subsystem 00:05:39.636 06:30:34 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:39.636 06:30:34 -- json_config/json_config.sh@396 -- # count=100 00:05:39.636 06:30:34 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:39.636 06:30:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.636 06:30:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.636 06:30:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.894 06:30:35 -- json_config/json_config.sh@398 -- # break 00:05:39.894 06:30:35 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:39.894 06:30:35 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:39.894 06:30:35 -- json_config/json_config.sh@120 -- # local app=target 00:05:39.894 06:30:35 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:39.894 06:30:35 -- json_config/json_config.sh@124 -- # [[ -n 65843 ]] 00:05:39.894 06:30:35 -- json_config/json_config.sh@127 -- # kill -SIGINT 65843 00:05:39.894 06:30:35 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:39.894 06:30:35 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:39.894 06:30:35 -- json_config/json_config.sh@130 -- # kill -0 65843 00:05:39.894 06:30:35 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:40.461 06:30:35 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:40.461 06:30:35 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:40.461 06:30:35 -- json_config/json_config.sh@130 -- # kill -0 65843 00:05:40.461 06:30:35 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:40.461 06:30:35 -- json_config/json_config.sh@132 -- # break 00:05:40.461 06:30:35 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:40.461 SPDK target shutdown done 00:05:40.461 06:30:35 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:40.461 INFO: relaunching applications... 00:05:40.461 06:30:35 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
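[editor's note] The shutdown just traced (clear_config.py, SIGINT, then a bounded kill -0 poll) has the following shape; a hedged reconstruction from the traced fragments above, with the pid variable standing in for app_pid[target] (65843 in this run).
    # Reconstructed shape of json_config_test_shutdown_app's wait loop (sketch only).
    pid=65843                        # app pid recorded when the target was launched
    kill -SIGINT "$pid"              # ask the target to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # pid gone => shutdown finished
        sleep 0.5
    done
    echo 'SPDK target shutdown done'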
00:05:40.461 06:30:35 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.461 06:30:35 -- json_config/json_config.sh@98 -- # local app=target 00:05:40.461 06:30:35 -- json_config/json_config.sh@99 -- # shift 00:05:40.461 06:30:35 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:40.461 06:30:35 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:40.461 06:30:35 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:40.461 06:30:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:40.461 06:30:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:40.461 06:30:35 -- json_config/json_config.sh@111 -- # app_pid[$app]=66028 00:05:40.461 Waiting for target to run... 00:05:40.461 06:30:35 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:40.461 06:30:35 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:40.461 06:30:35 -- json_config/json_config.sh@114 -- # waitforlisten 66028 /var/tmp/spdk_tgt.sock 00:05:40.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.461 06:30:35 -- common/autotest_common.sh@829 -- # '[' -z 66028 ']' 00:05:40.461 06:30:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.461 06:30:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.461 06:30:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.461 06:30:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.461 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:40.461 [2024-12-05 06:30:35.731709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:40.461 [2024-12-05 06:30:35.731967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66028 ] 00:05:40.720 [2024-12-05 06:30:36.030725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.720 [2024-12-05 06:30:36.048983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.720 [2024-12-05 06:30:36.049404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.978 [2024-12-05 06:30:36.338804] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.978 [2024-12-05 06:30:36.370881] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.545 00:05:41.545 INFO: Checking if target configuration is the same... 00:05:41.545 06:30:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.545 06:30:36 -- common/autotest_common.sh@862 -- # return 0 00:05:41.545 06:30:36 -- json_config/json_config.sh@115 -- # echo '' 00:05:41.545 06:30:36 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:41.545 06:30:36 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
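[editor's note] The "configuration is the same" check that follows relaunches the target from the saved JSON and diffs normalized dumps; a loose sketch of that comparison, assuming config_filter.py -method sort reads a config on stdin the way json_diff.sh feeds it, with the /tmp output paths as illustrative placeholders.
    # Sketch of the round-trip check json_diff.sh performs next (paths are placeholders).
    repo=/home/vagrant/spdk_repo/spdk
    filter="$repo/test/json_config/config_filter.py"
    "$repo/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$filter" -method sort > /tmp/live_sorted.json        # live target state
    "$filter" -method sort < "$repo/spdk_tgt_config.json" > /tmp/saved_sorted.json
    diff -u /tmp/live_sorted.json /tmp/saved_sorted.json \
        && echo 'INFO: JSON config files are the same'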
00:05:41.545 06:30:36 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:41.545 06:30:36 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:41.545 06:30:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.545 + '[' 2 -ne 2 ']' 00:05:41.545 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:41.545 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:41.545 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:41.545 +++ basename /dev/fd/62 00:05:41.545 ++ mktemp /tmp/62.XXX 00:05:41.545 + tmp_file_1=/tmp/62.bJS 00:05:41.545 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:41.545 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.545 + tmp_file_2=/tmp/spdk_tgt_config.json.cMZ 00:05:41.545 + ret=0 00:05:41.545 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.804 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:41.804 + diff -u /tmp/62.bJS /tmp/spdk_tgt_config.json.cMZ 00:05:41.804 INFO: JSON config files are the same 00:05:41.804 + echo 'INFO: JSON config files are the same' 00:05:41.804 + rm /tmp/62.bJS /tmp/spdk_tgt_config.json.cMZ 00:05:41.804 + exit 0 00:05:41.804 INFO: changing configuration and checking if this can be detected... 00:05:41.804 06:30:37 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:41.804 06:30:37 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.804 06:30:37 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.804 06:30:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.063 06:30:37 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:42.063 06:30:37 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:42.063 06:30:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.063 + '[' 2 -ne 2 ']' 00:05:42.063 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:42.063 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:42.063 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:42.063 +++ basename /dev/fd/62 00:05:42.063 ++ mktemp /tmp/62.XXX 00:05:42.063 + tmp_file_1=/tmp/62.qJj 00:05:42.063 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:42.063 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.063 + tmp_file_2=/tmp/spdk_tgt_config.json.1FH 00:05:42.063 + ret=0 00:05:42.063 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:42.322 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:42.582 + diff -u /tmp/62.qJj /tmp/spdk_tgt_config.json.1FH 00:05:42.582 + ret=1 00:05:42.582 + echo '=== Start of file: /tmp/62.qJj ===' 00:05:42.582 + cat /tmp/62.qJj 00:05:42.582 + echo '=== End of file: /tmp/62.qJj ===' 00:05:42.582 + echo '' 00:05:42.582 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1FH ===' 00:05:42.582 + cat /tmp/spdk_tgt_config.json.1FH 00:05:42.582 + echo '=== End of file: /tmp/spdk_tgt_config.json.1FH ===' 00:05:42.582 + echo '' 00:05:42.582 + rm /tmp/62.qJj /tmp/spdk_tgt_config.json.1FH 00:05:42.582 + exit 1 00:05:42.582 INFO: configuration change detected. 00:05:42.582 06:30:37 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:42.582 06:30:37 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:42.582 06:30:37 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:42.582 06:30:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.582 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.582 06:30:37 -- json_config/json_config.sh@360 -- # local ret=0 00:05:42.582 06:30:37 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:42.582 06:30:37 -- json_config/json_config.sh@370 -- # [[ -n 66028 ]] 00:05:42.582 06:30:37 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:42.582 06:30:37 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.582 06:30:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.582 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.582 06:30:37 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:42.582 06:30:37 -- json_config/json_config.sh@246 -- # uname -s 00:05:42.582 06:30:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:42.582 06:30:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:42.582 06:30:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:42.582 06:30:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.582 06:30:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.582 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.582 06:30:37 -- json_config/json_config.sh@376 -- # killprocess 66028 00:05:42.582 06:30:37 -- common/autotest_common.sh@936 -- # '[' -z 66028 ']' 00:05:42.582 06:30:37 -- common/autotest_common.sh@940 -- # kill -0 66028 00:05:42.582 06:30:37 -- common/autotest_common.sh@941 -- # uname 00:05:42.582 06:30:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.582 06:30:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66028 00:05:42.582 killing process with pid 66028 00:05:42.582 06:30:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.582 06:30:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.582 06:30:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66028' 00:05:42.582 
06:30:37 -- common/autotest_common.sh@955 -- # kill 66028 00:05:42.582 06:30:37 -- common/autotest_common.sh@960 -- # wait 66028 00:05:42.841 06:30:38 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:42.841 06:30:38 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:42.841 06:30:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.841 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.841 INFO: Success 00:05:42.841 06:30:38 -- json_config/json_config.sh@381 -- # return 0 00:05:42.841 06:30:38 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:42.841 ************************************ 00:05:42.841 END TEST json_config 00:05:42.841 ************************************ 00:05:42.841 00:05:42.841 real 0m8.002s 00:05:42.841 user 0m11.547s 00:05:42.841 sys 0m1.408s 00:05:42.841 06:30:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.841 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.841 06:30:38 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:42.841 06:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.841 06:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.841 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.841 ************************************ 00:05:42.841 START TEST json_config_extra_key 00:05:42.841 ************************************ 00:05:42.841 06:30:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:42.841 06:30:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.841 06:30:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.841 06:30:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.101 06:30:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.101 06:30:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.101 06:30:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.101 06:30:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.101 06:30:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.101 06:30:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.101 06:30:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.101 06:30:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.101 06:30:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.101 06:30:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.101 06:30:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.101 06:30:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.101 06:30:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.101 06:30:38 -- scripts/common.sh@344 -- # : 1 00:05:43.101 06:30:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.101 06:30:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.101 06:30:38 -- scripts/common.sh@364 -- # decimal 1 00:05:43.101 06:30:38 -- scripts/common.sh@352 -- # local d=1 00:05:43.101 06:30:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.101 06:30:38 -- scripts/common.sh@354 -- # echo 1 00:05:43.101 06:30:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.101 06:30:38 -- scripts/common.sh@365 -- # decimal 2 00:05:43.101 06:30:38 -- scripts/common.sh@352 -- # local d=2 00:05:43.101 06:30:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.101 06:30:38 -- scripts/common.sh@354 -- # echo 2 00:05:43.101 06:30:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.101 06:30:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.101 06:30:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.101 06:30:38 -- scripts/common.sh@367 -- # return 0 00:05:43.101 06:30:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.101 06:30:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.101 --rc genhtml_branch_coverage=1 00:05:43.101 --rc genhtml_function_coverage=1 00:05:43.101 --rc genhtml_legend=1 00:05:43.101 --rc geninfo_all_blocks=1 00:05:43.101 --rc geninfo_unexecuted_blocks=1 00:05:43.101 00:05:43.101 ' 00:05:43.101 06:30:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.101 --rc genhtml_branch_coverage=1 00:05:43.101 --rc genhtml_function_coverage=1 00:05:43.101 --rc genhtml_legend=1 00:05:43.101 --rc geninfo_all_blocks=1 00:05:43.101 --rc geninfo_unexecuted_blocks=1 00:05:43.101 00:05:43.101 ' 00:05:43.101 06:30:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.101 --rc genhtml_branch_coverage=1 00:05:43.101 --rc genhtml_function_coverage=1 00:05:43.101 --rc genhtml_legend=1 00:05:43.101 --rc geninfo_all_blocks=1 00:05:43.101 --rc geninfo_unexecuted_blocks=1 00:05:43.101 00:05:43.101 ' 00:05:43.101 06:30:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.101 --rc genhtml_branch_coverage=1 00:05:43.101 --rc genhtml_function_coverage=1 00:05:43.101 --rc genhtml_legend=1 00:05:43.101 --rc geninfo_all_blocks=1 00:05:43.101 --rc geninfo_unexecuted_blocks=1 00:05:43.101 00:05:43.101 ' 00:05:43.101 06:30:38 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.101 06:30:38 -- nvmf/common.sh@7 -- # uname -s 00:05:43.101 06:30:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.101 06:30:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.101 06:30:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.101 06:30:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.101 06:30:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.101 06:30:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.101 06:30:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.101 06:30:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.101 06:30:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.101 06:30:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.101 06:30:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:05:43.101 06:30:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:05:43.101 06:30:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.101 06:30:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.101 06:30:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.101 06:30:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.101 06:30:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.102 06:30:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.102 06:30:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.102 06:30:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.102 06:30:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.102 06:30:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.102 06:30:38 -- paths/export.sh@5 -- # export PATH 00:05:43.102 06:30:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.102 06:30:38 -- nvmf/common.sh@46 -- # : 0 00:05:43.102 06:30:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.102 06:30:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.102 06:30:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.102 06:30:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.102 06:30:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.102 06:30:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.102 06:30:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.102 06:30:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:43.102 INFO: launching applications... 00:05:43.102 Waiting for target to run... 00:05:43.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66180 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66180 /var/tmp/spdk_tgt.sock 00:05:43.102 06:30:38 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.102 06:30:38 -- common/autotest_common.sh@829 -- # '[' -z 66180 ']' 00:05:43.102 06:30:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.102 06:30:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.102 06:30:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.102 06:30:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.102 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:43.102 [2024-12-05 06:30:38.439371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
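[editor's note] The launch-and-wait that follows (the EAL parameter line, then waitforlisten) is the generic start-app pattern: start spdk_tgt with the extra-key JSON, remember its PID, and poll until the RPC socket answers. A compressed sketch of both halves of this test, startup and SIGINT shutdown, follows; spdk_get_version serves as the liveness probe here (it appears in the rpc_get_methods listing later in this log), while the real waitforlisten and json_config_test_shutdown_app helpers in autotest_common.sh carry more retry and error handling:

# Hedged sketch of json_config_test_start_app + json_config_test_shutdown_app.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json ./test/json_config/extra_key.json &
app_pid=$!
# Poll the UNIX RPC socket until the target answers.
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version \
        >/dev/null 2>&1 && break
    sleep 0.1
done
# Shutdown: SIGINT, then poll kill -0 up to 30 times at 0.5 s, as the trace does.
kill -SIGINT "$app_pid"
for _ in $(seq 1 30); do
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
done
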
00:05:43.102 [2024-12-05 06:30:38.439672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66180 ] 00:05:43.361 [2024-12-05 06:30:38.742702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.361 [2024-12-05 06:30:38.760664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.361 [2024-12-05 06:30:38.761094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.298 06:30:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.298 06:30:39 -- common/autotest_common.sh@862 -- # return 0 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:44.298 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:44.298 INFO: shutting down applications... 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66180 ]] 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66180 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66180 00:05:44.298 06:30:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66180 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:44.556 SPDK target shutdown done 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:44.556 Success 00:05:44.556 06:30:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:44.556 00:05:44.556 real 0m1.757s 00:05:44.556 user 0m1.597s 00:05:44.556 sys 0m0.322s 00:05:44.556 ************************************ 00:05:44.556 END TEST json_config_extra_key 00:05:44.556 ************************************ 00:05:44.556 06:30:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.556 06:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:44.556 06:30:39 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.556 06:30:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.556 06:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.556 06:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:44.556 ************************************ 00:05:44.556 START TEST alias_rpc 00:05:44.556 ************************************ 00:05:44.556 06:30:40 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.815 * Looking for test storage... 00:05:44.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:44.815 06:30:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.815 06:30:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.815 06:30:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.815 06:30:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.815 06:30:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.815 06:30:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.815 06:30:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.815 06:30:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.815 06:30:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.815 06:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.815 06:30:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.815 06:30:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.815 06:30:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.815 06:30:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.815 06:30:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.815 06:30:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.815 06:30:40 -- scripts/common.sh@344 -- # : 1 00:05:44.815 06:30:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.815 06:30:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.815 06:30:40 -- scripts/common.sh@364 -- # decimal 1 00:05:44.815 06:30:40 -- scripts/common.sh@352 -- # local d=1 00:05:44.815 06:30:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.815 06:30:40 -- scripts/common.sh@354 -- # echo 1 00:05:44.815 06:30:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.815 06:30:40 -- scripts/common.sh@365 -- # decimal 2 00:05:44.815 06:30:40 -- scripts/common.sh@352 -- # local d=2 00:05:44.815 06:30:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.815 06:30:40 -- scripts/common.sh@354 -- # echo 2 00:05:44.815 06:30:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.815 06:30:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.815 06:30:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.815 06:30:40 -- scripts/common.sh@367 -- # return 0 00:05:44.815 06:30:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.815 06:30:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.815 --rc genhtml_branch_coverage=1 00:05:44.815 --rc genhtml_function_coverage=1 00:05:44.815 --rc genhtml_legend=1 00:05:44.815 --rc geninfo_all_blocks=1 00:05:44.815 --rc geninfo_unexecuted_blocks=1 00:05:44.815 00:05:44.815 ' 00:05:44.815 06:30:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.815 --rc genhtml_branch_coverage=1 00:05:44.815 --rc genhtml_function_coverage=1 00:05:44.815 --rc genhtml_legend=1 00:05:44.815 --rc geninfo_all_blocks=1 00:05:44.815 --rc geninfo_unexecuted_blocks=1 00:05:44.815 00:05:44.815 ' 00:05:44.815 06:30:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.815 --rc genhtml_branch_coverage=1 00:05:44.815 --rc genhtml_function_coverage=1 00:05:44.815 --rc genhtml_legend=1 
00:05:44.815 --rc geninfo_all_blocks=1 00:05:44.815 --rc geninfo_unexecuted_blocks=1 00:05:44.815 00:05:44.815 ' 00:05:44.815 06:30:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.815 --rc genhtml_branch_coverage=1 00:05:44.815 --rc genhtml_function_coverage=1 00:05:44.815 --rc genhtml_legend=1 00:05:44.815 --rc geninfo_all_blocks=1 00:05:44.815 --rc geninfo_unexecuted_blocks=1 00:05:44.815 00:05:44.815 ' 00:05:44.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.815 06:30:40 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.815 06:30:40 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66247 00:05:44.815 06:30:40 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66247 00:05:44.815 06:30:40 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.815 06:30:40 -- common/autotest_common.sh@829 -- # '[' -z 66247 ']' 00:05:44.815 06:30:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.815 06:30:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.815 06:30:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.815 06:30:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.815 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:44.815 [2024-12-05 06:30:40.242699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:44.815 [2024-12-05 06:30:40.242980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66247 ] 00:05:45.074 [2024-12-05 06:30:40.381688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.075 [2024-12-05 06:30:40.420584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.075 [2024-12-05 06:30:40.420983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.013 06:30:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.013 06:30:41 -- common/autotest_common.sh@862 -- # return 0 00:05:46.013 06:30:41 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:46.272 06:30:41 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66247 00:05:46.272 06:30:41 -- common/autotest_common.sh@936 -- # '[' -z 66247 ']' 00:05:46.272 06:30:41 -- common/autotest_common.sh@940 -- # kill -0 66247 00:05:46.272 06:30:41 -- common/autotest_common.sh@941 -- # uname 00:05:46.272 06:30:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.272 06:30:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66247 00:05:46.272 killing process with pid 66247 00:05:46.272 06:30:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.272 06:30:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.272 06:30:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66247' 00:05:46.272 06:30:41 -- common/autotest_common.sh@955 -- # kill 66247 00:05:46.272 06:30:41 -- common/autotest_common.sh@960 -- # wait 66247 00:05:46.532 ************************************ 00:05:46.532 END TEST alias_rpc 00:05:46.532 
************************************ 00:05:46.532 00:05:46.532 real 0m1.760s 00:05:46.532 user 0m2.107s 00:05:46.532 sys 0m0.359s 00:05:46.532 06:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.532 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 06:30:41 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:46.532 06:30:41 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:46.532 06:30:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.532 06:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.532 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 ************************************ 00:05:46.532 START TEST spdkcli_tcp 00:05:46.532 ************************************ 00:05:46.532 06:30:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:46.532 * Looking for test storage... 00:05:46.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:46.532 06:30:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.532 06:30:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.532 06:30:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.532 06:30:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.532 06:30:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.532 06:30:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.532 06:30:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.532 06:30:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.532 06:30:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.532 06:30:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.532 06:30:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.532 06:30:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.532 06:30:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.532 06:30:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.532 06:30:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.532 06:30:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.532 06:30:41 -- scripts/common.sh@344 -- # : 1 00:05:46.532 06:30:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.532 06:30:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.532 06:30:41 -- scripts/common.sh@364 -- # decimal 1 00:05:46.532 06:30:41 -- scripts/common.sh@352 -- # local d=1 00:05:46.532 06:30:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.532 06:30:41 -- scripts/common.sh@354 -- # echo 1 00:05:46.532 06:30:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.532 06:30:41 -- scripts/common.sh@365 -- # decimal 2 00:05:46.532 06:30:41 -- scripts/common.sh@352 -- # local d=2 00:05:46.532 06:30:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.532 06:30:41 -- scripts/common.sh@354 -- # echo 2 00:05:46.532 06:30:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.532 06:30:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.532 06:30:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.532 06:30:41 -- scripts/common.sh@367 -- # return 0 00:05:46.532 06:30:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.791 06:30:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.791 --rc genhtml_branch_coverage=1 00:05:46.791 --rc genhtml_function_coverage=1 00:05:46.791 --rc genhtml_legend=1 00:05:46.791 --rc geninfo_all_blocks=1 00:05:46.791 --rc geninfo_unexecuted_blocks=1 00:05:46.791 00:05:46.791 ' 00:05:46.791 06:30:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.792 --rc genhtml_branch_coverage=1 00:05:46.792 --rc genhtml_function_coverage=1 00:05:46.792 --rc genhtml_legend=1 00:05:46.792 --rc geninfo_all_blocks=1 00:05:46.792 --rc geninfo_unexecuted_blocks=1 00:05:46.792 00:05:46.792 ' 00:05:46.792 06:30:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.792 --rc genhtml_branch_coverage=1 00:05:46.792 --rc genhtml_function_coverage=1 00:05:46.792 --rc genhtml_legend=1 00:05:46.792 --rc geninfo_all_blocks=1 00:05:46.792 --rc geninfo_unexecuted_blocks=1 00:05:46.792 00:05:46.792 ' 00:05:46.792 06:30:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.792 --rc genhtml_branch_coverage=1 00:05:46.792 --rc genhtml_function_coverage=1 00:05:46.792 --rc genhtml_legend=1 00:05:46.792 --rc geninfo_all_blocks=1 00:05:46.792 --rc geninfo_unexecuted_blocks=1 00:05:46.792 00:05:46.792 ' 00:05:46.792 06:30:41 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:46.792 06:30:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:46.792 06:30:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:46.792 06:30:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.792 06:30:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.792 06:30:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.792 06:30:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.792 06:30:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.792 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.792 06:30:42 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66330 00:05:46.792 06:30:42 -- spdkcli/tcp.sh@27 -- # waitforlisten 66330 00:05:46.792 06:30:42 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.792 06:30:42 -- common/autotest_common.sh@829 -- # '[' -z 66330 ']' 00:05:46.792 06:30:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.792 06:30:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.792 06:30:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.792 06:30:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.792 06:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.792 [2024-12-05 06:30:42.058845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:46.792 [2024-12-05 06:30:42.059155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66330 ] 00:05:46.792 [2024-12-05 06:30:42.196164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.792 [2024-12-05 06:30:42.228751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.792 [2024-12-05 06:30:42.229473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.792 [2024-12-05 06:30:42.229489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.732 06:30:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.732 06:30:42 -- common/autotest_common.sh@862 -- # return 0 00:05:47.732 06:30:42 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.732 06:30:42 -- spdkcli/tcp.sh@31 -- # socat_pid=66347 00:05:47.732 06:30:42 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.992 [ 00:05:47.992 "bdev_malloc_delete", 00:05:47.992 "bdev_malloc_create", 00:05:47.992 "bdev_null_resize", 00:05:47.992 "bdev_null_delete", 00:05:47.992 "bdev_null_create", 00:05:47.992 "bdev_nvme_cuse_unregister", 00:05:47.992 "bdev_nvme_cuse_register", 00:05:47.993 "bdev_opal_new_user", 00:05:47.993 "bdev_opal_set_lock_state", 00:05:47.993 "bdev_opal_delete", 00:05:47.993 "bdev_opal_get_info", 00:05:47.993 "bdev_opal_create", 00:05:47.993 "bdev_nvme_opal_revert", 00:05:47.993 "bdev_nvme_opal_init", 00:05:47.993 "bdev_nvme_send_cmd", 00:05:47.993 "bdev_nvme_get_path_iostat", 00:05:47.993 "bdev_nvme_get_mdns_discovery_info", 00:05:47.993 "bdev_nvme_stop_mdns_discovery", 00:05:47.993 "bdev_nvme_start_mdns_discovery", 00:05:47.993 "bdev_nvme_set_multipath_policy", 00:05:47.993 "bdev_nvme_set_preferred_path", 00:05:47.993 "bdev_nvme_get_io_paths", 00:05:47.993 "bdev_nvme_remove_error_injection", 00:05:47.993 "bdev_nvme_add_error_injection", 00:05:47.993 "bdev_nvme_get_discovery_info", 00:05:47.993 "bdev_nvme_stop_discovery", 00:05:47.993 "bdev_nvme_start_discovery", 00:05:47.993 "bdev_nvme_get_controller_health_info", 00:05:47.993 "bdev_nvme_disable_controller", 00:05:47.993 "bdev_nvme_enable_controller", 00:05:47.993 "bdev_nvme_reset_controller", 00:05:47.993 "bdev_nvme_get_transport_statistics", 00:05:47.993 "bdev_nvme_apply_firmware", 00:05:47.993 "bdev_nvme_detach_controller", 00:05:47.993 "bdev_nvme_get_controllers", 00:05:47.993 "bdev_nvme_attach_controller", 00:05:47.993 
"bdev_nvme_set_hotplug", 00:05:47.993 "bdev_nvme_set_options", 00:05:47.993 "bdev_passthru_delete", 00:05:47.993 "bdev_passthru_create", 00:05:47.993 "bdev_lvol_grow_lvstore", 00:05:47.993 "bdev_lvol_get_lvols", 00:05:47.993 "bdev_lvol_get_lvstores", 00:05:47.993 "bdev_lvol_delete", 00:05:47.993 "bdev_lvol_set_read_only", 00:05:47.993 "bdev_lvol_resize", 00:05:47.993 "bdev_lvol_decouple_parent", 00:05:47.993 "bdev_lvol_inflate", 00:05:47.993 "bdev_lvol_rename", 00:05:47.993 "bdev_lvol_clone_bdev", 00:05:47.993 "bdev_lvol_clone", 00:05:47.993 "bdev_lvol_snapshot", 00:05:47.993 "bdev_lvol_create", 00:05:47.993 "bdev_lvol_delete_lvstore", 00:05:47.993 "bdev_lvol_rename_lvstore", 00:05:47.993 "bdev_lvol_create_lvstore", 00:05:47.993 "bdev_raid_set_options", 00:05:47.993 "bdev_raid_remove_base_bdev", 00:05:47.993 "bdev_raid_add_base_bdev", 00:05:47.993 "bdev_raid_delete", 00:05:47.993 "bdev_raid_create", 00:05:47.993 "bdev_raid_get_bdevs", 00:05:47.993 "bdev_error_inject_error", 00:05:47.993 "bdev_error_delete", 00:05:47.993 "bdev_error_create", 00:05:47.993 "bdev_split_delete", 00:05:47.993 "bdev_split_create", 00:05:47.993 "bdev_delay_delete", 00:05:47.993 "bdev_delay_create", 00:05:47.993 "bdev_delay_update_latency", 00:05:47.993 "bdev_zone_block_delete", 00:05:47.993 "bdev_zone_block_create", 00:05:47.993 "blobfs_create", 00:05:47.993 "blobfs_detect", 00:05:47.993 "blobfs_set_cache_size", 00:05:47.993 "bdev_aio_delete", 00:05:47.993 "bdev_aio_rescan", 00:05:47.993 "bdev_aio_create", 00:05:47.993 "bdev_ftl_set_property", 00:05:47.993 "bdev_ftl_get_properties", 00:05:47.993 "bdev_ftl_get_stats", 00:05:47.993 "bdev_ftl_unmap", 00:05:47.993 "bdev_ftl_unload", 00:05:47.993 "bdev_ftl_delete", 00:05:47.993 "bdev_ftl_load", 00:05:47.993 "bdev_ftl_create", 00:05:47.993 "bdev_virtio_attach_controller", 00:05:47.993 "bdev_virtio_scsi_get_devices", 00:05:47.993 "bdev_virtio_detach_controller", 00:05:47.993 "bdev_virtio_blk_set_hotplug", 00:05:47.993 "bdev_iscsi_delete", 00:05:47.993 "bdev_iscsi_create", 00:05:47.993 "bdev_iscsi_set_options", 00:05:47.993 "bdev_uring_delete", 00:05:47.993 "bdev_uring_create", 00:05:47.993 "accel_error_inject_error", 00:05:47.993 "ioat_scan_accel_module", 00:05:47.993 "dsa_scan_accel_module", 00:05:47.993 "iaa_scan_accel_module", 00:05:47.993 "iscsi_set_options", 00:05:47.993 "iscsi_get_auth_groups", 00:05:47.993 "iscsi_auth_group_remove_secret", 00:05:47.993 "iscsi_auth_group_add_secret", 00:05:47.993 "iscsi_delete_auth_group", 00:05:47.993 "iscsi_create_auth_group", 00:05:47.993 "iscsi_set_discovery_auth", 00:05:47.993 "iscsi_get_options", 00:05:47.993 "iscsi_target_node_request_logout", 00:05:47.993 "iscsi_target_node_set_redirect", 00:05:47.993 "iscsi_target_node_set_auth", 00:05:47.993 "iscsi_target_node_add_lun", 00:05:47.993 "iscsi_get_connections", 00:05:47.993 "iscsi_portal_group_set_auth", 00:05:47.993 "iscsi_start_portal_group", 00:05:47.993 "iscsi_delete_portal_group", 00:05:47.993 "iscsi_create_portal_group", 00:05:47.993 "iscsi_get_portal_groups", 00:05:47.993 "iscsi_delete_target_node", 00:05:47.993 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.993 "iscsi_target_node_add_pg_ig_maps", 00:05:47.993 "iscsi_create_target_node", 00:05:47.993 "iscsi_get_target_nodes", 00:05:47.993 "iscsi_delete_initiator_group", 00:05:47.993 "iscsi_initiator_group_remove_initiators", 00:05:47.993 "iscsi_initiator_group_add_initiators", 00:05:47.993 "iscsi_create_initiator_group", 00:05:47.993 "iscsi_get_initiator_groups", 00:05:47.993 "nvmf_set_crdt", 00:05:47.993 
"nvmf_set_config", 00:05:47.993 "nvmf_set_max_subsystems", 00:05:47.993 "nvmf_subsystem_get_listeners", 00:05:47.993 "nvmf_subsystem_get_qpairs", 00:05:47.993 "nvmf_subsystem_get_controllers", 00:05:47.993 "nvmf_get_stats", 00:05:47.993 "nvmf_get_transports", 00:05:47.993 "nvmf_create_transport", 00:05:47.993 "nvmf_get_targets", 00:05:47.993 "nvmf_delete_target", 00:05:47.993 "nvmf_create_target", 00:05:47.993 "nvmf_subsystem_allow_any_host", 00:05:47.993 "nvmf_subsystem_remove_host", 00:05:47.993 "nvmf_subsystem_add_host", 00:05:47.993 "nvmf_subsystem_remove_ns", 00:05:47.993 "nvmf_subsystem_add_ns", 00:05:47.993 "nvmf_subsystem_listener_set_ana_state", 00:05:47.993 "nvmf_discovery_get_referrals", 00:05:47.993 "nvmf_discovery_remove_referral", 00:05:47.993 "nvmf_discovery_add_referral", 00:05:47.993 "nvmf_subsystem_remove_listener", 00:05:47.993 "nvmf_subsystem_add_listener", 00:05:47.993 "nvmf_delete_subsystem", 00:05:47.993 "nvmf_create_subsystem", 00:05:47.993 "nvmf_get_subsystems", 00:05:47.993 "env_dpdk_get_mem_stats", 00:05:47.993 "nbd_get_disks", 00:05:47.993 "nbd_stop_disk", 00:05:47.993 "nbd_start_disk", 00:05:47.993 "ublk_recover_disk", 00:05:47.993 "ublk_get_disks", 00:05:47.993 "ublk_stop_disk", 00:05:47.993 "ublk_start_disk", 00:05:47.993 "ublk_destroy_target", 00:05:47.993 "ublk_create_target", 00:05:47.993 "virtio_blk_create_transport", 00:05:47.993 "virtio_blk_get_transports", 00:05:47.993 "vhost_controller_set_coalescing", 00:05:47.993 "vhost_get_controllers", 00:05:47.993 "vhost_delete_controller", 00:05:47.993 "vhost_create_blk_controller", 00:05:47.993 "vhost_scsi_controller_remove_target", 00:05:47.993 "vhost_scsi_controller_add_target", 00:05:47.993 "vhost_start_scsi_controller", 00:05:47.993 "vhost_create_scsi_controller", 00:05:47.993 "thread_set_cpumask", 00:05:47.993 "framework_get_scheduler", 00:05:47.993 "framework_set_scheduler", 00:05:47.993 "framework_get_reactors", 00:05:47.993 "thread_get_io_channels", 00:05:47.993 "thread_get_pollers", 00:05:47.993 "thread_get_stats", 00:05:47.993 "framework_monitor_context_switch", 00:05:47.993 "spdk_kill_instance", 00:05:47.993 "log_enable_timestamps", 00:05:47.993 "log_get_flags", 00:05:47.993 "log_clear_flag", 00:05:47.993 "log_set_flag", 00:05:47.993 "log_get_level", 00:05:47.993 "log_set_level", 00:05:47.993 "log_get_print_level", 00:05:47.993 "log_set_print_level", 00:05:47.993 "framework_enable_cpumask_locks", 00:05:47.993 "framework_disable_cpumask_locks", 00:05:47.993 "framework_wait_init", 00:05:47.993 "framework_start_init", 00:05:47.993 "scsi_get_devices", 00:05:47.993 "bdev_get_histogram", 00:05:47.993 "bdev_enable_histogram", 00:05:47.993 "bdev_set_qos_limit", 00:05:47.993 "bdev_set_qd_sampling_period", 00:05:47.993 "bdev_get_bdevs", 00:05:47.993 "bdev_reset_iostat", 00:05:47.993 "bdev_get_iostat", 00:05:47.993 "bdev_examine", 00:05:47.993 "bdev_wait_for_examine", 00:05:47.993 "bdev_set_options", 00:05:47.993 "notify_get_notifications", 00:05:47.993 "notify_get_types", 00:05:47.993 "accel_get_stats", 00:05:47.993 "accel_set_options", 00:05:47.993 "accel_set_driver", 00:05:47.993 "accel_crypto_key_destroy", 00:05:47.993 "accel_crypto_keys_get", 00:05:47.993 "accel_crypto_key_create", 00:05:47.993 "accel_assign_opc", 00:05:47.993 "accel_get_module_info", 00:05:47.993 "accel_get_opc_assignments", 00:05:47.993 "vmd_rescan", 00:05:47.993 "vmd_remove_device", 00:05:47.993 "vmd_enable", 00:05:47.993 "sock_set_default_impl", 00:05:47.993 "sock_impl_set_options", 00:05:47.993 "sock_impl_get_options", 00:05:47.993 
"iobuf_get_stats", 00:05:47.993 "iobuf_set_options", 00:05:47.993 "framework_get_pci_devices", 00:05:47.993 "framework_get_config", 00:05:47.993 "framework_get_subsystems", 00:05:47.993 "trace_get_info", 00:05:47.993 "trace_get_tpoint_group_mask", 00:05:47.993 "trace_disable_tpoint_group", 00:05:47.993 "trace_enable_tpoint_group", 00:05:47.993 "trace_clear_tpoint_mask", 00:05:47.993 "trace_set_tpoint_mask", 00:05:47.993 "spdk_get_version", 00:05:47.993 "rpc_get_methods" 00:05:47.993 ] 00:05:47.993 06:30:43 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.993 06:30:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.993 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:47.993 06:30:43 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.993 06:30:43 -- spdkcli/tcp.sh@38 -- # killprocess 66330 00:05:47.994 06:30:43 -- common/autotest_common.sh@936 -- # '[' -z 66330 ']' 00:05:47.994 06:30:43 -- common/autotest_common.sh@940 -- # kill -0 66330 00:05:47.994 06:30:43 -- common/autotest_common.sh@941 -- # uname 00:05:47.994 06:30:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.994 06:30:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66330 00:05:47.994 killing process with pid 66330 00:05:47.994 06:30:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.994 06:30:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.994 06:30:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66330' 00:05:47.994 06:30:43 -- common/autotest_common.sh@955 -- # kill 66330 00:05:47.994 06:30:43 -- common/autotest_common.sh@960 -- # wait 66330 00:05:48.253 ************************************ 00:05:48.253 END TEST spdkcli_tcp 00:05:48.253 ************************************ 00:05:48.253 00:05:48.253 real 0m1.716s 00:05:48.253 user 0m3.257s 00:05:48.253 sys 0m0.374s 00:05:48.253 06:30:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.253 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.253 06:30:43 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.253 06:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.253 06:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.253 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.253 ************************************ 00:05:48.253 START TEST dpdk_mem_utility 00:05:48.253 ************************************ 00:05:48.253 06:30:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.253 * Looking for test storage... 
00:05:48.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:48.253 06:30:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:48.253 06:30:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:48.253 06:30:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:48.512 06:30:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:48.512 06:30:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:48.512 06:30:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:48.512 06:30:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:48.512 06:30:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:48.512 06:30:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:48.512 06:30:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.512 06:30:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:48.512 06:30:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:48.512 06:30:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:48.512 06:30:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:48.512 06:30:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:48.512 06:30:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:48.512 06:30:43 -- scripts/common.sh@344 -- # : 1 00:05:48.512 06:30:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:48.512 06:30:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.512 06:30:43 -- scripts/common.sh@364 -- # decimal 1 00:05:48.512 06:30:43 -- scripts/common.sh@352 -- # local d=1 00:05:48.512 06:30:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.512 06:30:43 -- scripts/common.sh@354 -- # echo 1 00:05:48.512 06:30:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:48.512 06:30:43 -- scripts/common.sh@365 -- # decimal 2 00:05:48.512 06:30:43 -- scripts/common.sh@352 -- # local d=2 00:05:48.512 06:30:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.512 06:30:43 -- scripts/common.sh@354 -- # echo 2 00:05:48.512 06:30:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:48.512 06:30:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:48.512 06:30:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:48.512 06:30:43 -- scripts/common.sh@367 -- # return 0 00:05:48.512 06:30:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.512 06:30:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:48.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.512 --rc genhtml_branch_coverage=1 00:05:48.512 --rc genhtml_function_coverage=1 00:05:48.512 --rc genhtml_legend=1 00:05:48.512 --rc geninfo_all_blocks=1 00:05:48.512 --rc geninfo_unexecuted_blocks=1 00:05:48.512 00:05:48.512 ' 00:05:48.512 06:30:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:48.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.512 --rc genhtml_branch_coverage=1 00:05:48.512 --rc genhtml_function_coverage=1 00:05:48.512 --rc genhtml_legend=1 00:05:48.512 --rc geninfo_all_blocks=1 00:05:48.512 --rc geninfo_unexecuted_blocks=1 00:05:48.512 00:05:48.512 ' 00:05:48.512 06:30:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:48.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.512 --rc genhtml_branch_coverage=1 00:05:48.512 --rc genhtml_function_coverage=1 00:05:48.512 --rc genhtml_legend=1 00:05:48.512 --rc geninfo_all_blocks=1 00:05:48.512 --rc geninfo_unexecuted_blocks=1 00:05:48.512 00:05:48.512 ' 
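[editor's note] Past the remaining lcov boilerplate, the body of TEST dpdk_mem_utility below is a two-step flow: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK memory state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump, first as the heap/mempool/memzone summary and then, invoked with -m 0 exactly as in the trace, as the per-element listing that closes this log. A minimal sketch of the same flow driven by hand, assuming a target already listening on the default /var/tmp/spdk.sock:

# Hedged sketch of the dpdk_mem_utility flow exercised below.
./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # heaps / mempools / memzones summary
./scripts/dpdk_mem_info.py -m 0           # detailed per-element view, as in the trace
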
00:05:48.512 06:30:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:48.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.512 --rc genhtml_branch_coverage=1 00:05:48.512 --rc genhtml_function_coverage=1 00:05:48.512 --rc genhtml_legend=1 00:05:48.512 --rc geninfo_all_blocks=1 00:05:48.512 --rc geninfo_unexecuted_blocks=1 00:05:48.512 00:05:48.512 ' 00:05:48.512 06:30:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.512 06:30:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66428 00:05:48.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.512 06:30:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66428 00:05:48.512 06:30:43 -- common/autotest_common.sh@829 -- # '[' -z 66428 ']' 00:05:48.512 06:30:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.512 06:30:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.512 06:30:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.512 06:30:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.512 06:30:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.513 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.513 [2024-12-05 06:30:43.815410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:48.513 [2024-12-05 06:30:43.815741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66428 ] 00:05:48.513 [2024-12-05 06:30:43.947738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.771 [2024-12-05 06:30:43.982689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.771 [2024-12-05 06:30:43.982846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.355 06:30:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.355 06:30:44 -- common/autotest_common.sh@862 -- # return 0 00:05:49.355 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.355 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.355 06:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.355 06:30:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.355 { 00:05:49.355 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.355 } 00:05:49.355 06:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.355 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.632 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.632 1 heaps totaling size 814.000000 MiB 00:05:49.632 size: 814.000000 MiB heap id: 0 00:05:49.632 end heaps---------- 00:05:49.632 8 mempools totaling size 598.116089 MiB 00:05:49.632 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.632 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.632 size: 84.521057 MiB name: bdev_io_66428 00:05:49.632 size: 51.011292 MiB name: evtpool_66428 00:05:49.632 size: 50.003479 MiB name: msgpool_66428 
00:05:49.632 size: 21.763794 MiB name: PDU_Pool 00:05:49.632 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.632 size: 0.026123 MiB name: Session_Pool 00:05:49.632 end mempools------- 00:05:49.632 6 memzones totaling size 4.142822 MiB 00:05:49.632 size: 1.000366 MiB name: RG_ring_0_66428 00:05:49.632 size: 1.000366 MiB name: RG_ring_1_66428 00:05:49.632 size: 1.000366 MiB name: RG_ring_4_66428 00:05:49.632 size: 1.000366 MiB name: RG_ring_5_66428 00:05:49.632 size: 0.125366 MiB name: RG_ring_2_66428 00:05:49.632 size: 0.015991 MiB name: RG_ring_3_66428 00:05:49.632 end memzones------- 00:05:49.632 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.632 heap id: 0 total size: 814.000000 MiB number of busy elements: 308 number of free elements: 15 00:05:49.632 list of free elements. size: 12.470459 MiB 00:05:49.632 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:49.632 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:49.632 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:49.632 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:49.632 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:49.632 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:49.633 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:49.633 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:49.633 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:49.633 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:05:49.633 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:49.633 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:49.633 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:49.633 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:49.633 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:49.633 list of standard malloc elements. 
size: 199.266968 MiB
00:05:49.633 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:49.633 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:49.633 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:49.633 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:49.633 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:49.633 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:49.633 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:49.633 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:49.633 element at address: 0x2000192efdc0 with size: 0.000305 MiB
[heap element dump trimmed: the remaining several hundred elements, from 0x2000002d5340 through 0x200027e6ff00, are 0.000183 MiB each]
00:05:49.636 list of memzone associated elements. size: 602.262573 MiB
00:05:49.636 element at address: 0x20001aa95500 with size: 211.416748 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:49.636 element at address: 0x200027e6ffc0 with size: 157.562561 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:49.636 element at address: 0x2000139fab80 with size: 84.020630 MiB; associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66428_0
00:05:49.636 element at address: 0x2000009ff380 with size: 48.003052 MiB; associated memzone info: size: 48.002930 MiB name: MP_evtpool_66428_0
00:05:49.636 element at address: 0x200003fff380 with size: 48.003052 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_66428_0
00:05:49.636 element at address: 0x2000195be940 with size: 20.255554 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:49.636 element at address: 0x200031dfeb40 with size: 18.005066 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:49.636 element at address: 0x2000005ffe00 with size: 2.000488 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66428
00:05:49.636 element at address: 0x200003bffe00 with size: 2.000488 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66428
00:05:49.636 element at address: 0x2000002d7d00 with size: 1.008118 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_66428
00:05:49.636 element at address: 0x20000b2fde40 with size: 1.008118 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:49.636 element at address: 0x2000194bc800 with size: 1.008118 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:49.636 element at address: 0x2000070fde40 with size: 1.008118 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:49.636 element at address: 0x2000008fd240 with size: 1.008118 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:49.636 element at address: 0x200003eff180 with size: 1.000488 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_66428
00:05:49.636 element at address: 0x200003affc00 with size: 1.000488 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_66428
00:05:49.636 element at address: 0x2000138fa980 with size: 1.000488 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_66428
00:05:49.636 element at address: 0x200031cfe940 with size: 1.000488 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_66428
00:05:49.636 element at address: 0x200003a5b100 with size: 0.500488 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66428
00:05:49.636 element at address: 0x20000b27db80 with size: 0.500488 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:49.636 element at address: 0x20000087cf80 with size: 0.500488 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:49.636 element at address: 0x20001947c540 with size: 0.250488 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:49.636 element at address: 0x200003adf880 with size: 0.125488 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_66428
00:05:49.636 element at address: 0x2000070f5b80 with size: 0.031738 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:49.636 element at address: 0x200027e65680 with size: 0.023743 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:49.636 element at address: 0x200003adb5c0 with size: 0.016113 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_66428
00:05:49.636 element at address: 0x200027e6b7c0 with size: 0.002441 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:49.636 element at address: 0x2000002d6780 with size: 0.000305 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_66428
00:05:49.636 element at address: 0x200003adb3c0 with size: 0.000305 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66428
00:05:49.636 element at address: 0x200027e6c280 with size: 0.000305 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:49.636 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:49.636 06:30:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66428
00:05:49.636 06:30:44 -- common/autotest_common.sh@936 -- # '[' -z 66428 ']'
00:05:49.636 06:30:44 -- common/autotest_common.sh@940 -- # kill -0 66428
00:05:49.636 06:30:44 -- common/autotest_common.sh@941 -- # uname
00:05:49.636 06:30:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:49.636 06:30:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66428
00:05:49.636 killing process with pid 66428
06:30:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:49.636 06:30:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:49.636 06:30:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66428'
00:05:49.636 06:30:44 -- common/autotest_common.sh@955 -- # kill 66428
00:05:49.636 06:30:44 -- common/autotest_common.sh@960 -- # wait 66428
00:05:49.904
00:05:49.904 real 0m1.587s
00:05:49.904 user 0m1.770s
00:05:49.904 sys 0m0.349s
00:05:49.904 06:30:45 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:49.904 06:30:45 -- common/autotest_common.sh@10 -- # set +x
00:05:49.904 ************************************
00:05:49.904 END TEST dpdk_mem_utility
00:05:49.904 ************************************
00:05:49.904 06:30:45 -- spdk/autotest.sh@174 -- # run_test event
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.904 06:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.904 06:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.904 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:49.904 ************************************ 00:05:49.904 START TEST event 00:05:49.904 ************************************ 00:05:49.904 06:30:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.904 * Looking for test storage... 00:05:49.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.904 06:30:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.904 06:30:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.904 06:30:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.162 06:30:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.162 06:30:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.162 06:30:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.162 06:30:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.162 06:30:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.162 06:30:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.162 06:30:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.162 06:30:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.162 06:30:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.162 06:30:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.162 06:30:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.162 06:30:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.162 06:30:45 -- scripts/common.sh@344 -- # : 1 00:05:50.162 06:30:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.162 06:30:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.162 06:30:45 -- scripts/common.sh@364 -- # decimal 1 00:05:50.162 06:30:45 -- scripts/common.sh@352 -- # local d=1 00:05:50.162 06:30:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.162 06:30:45 -- scripts/common.sh@354 -- # echo 1 00:05:50.162 06:30:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.162 06:30:45 -- scripts/common.sh@365 -- # decimal 2 00:05:50.162 06:30:45 -- scripts/common.sh@352 -- # local d=2 00:05:50.162 06:30:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.162 06:30:45 -- scripts/common.sh@354 -- # echo 2 00:05:50.162 06:30:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.162 06:30:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.162 06:30:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.162 06:30:45 -- scripts/common.sh@367 -- # return 0 00:05:50.162 06:30:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.162 --rc genhtml_branch_coverage=1 00:05:50.162 --rc genhtml_function_coverage=1 00:05:50.162 --rc genhtml_legend=1 00:05:50.162 --rc geninfo_all_blocks=1 00:05:50.162 --rc geninfo_unexecuted_blocks=1 00:05:50.162 00:05:50.162 ' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.162 --rc genhtml_branch_coverage=1 00:05:50.162 --rc genhtml_function_coverage=1 00:05:50.162 --rc genhtml_legend=1 00:05:50.162 --rc geninfo_all_blocks=1 00:05:50.162 --rc geninfo_unexecuted_blocks=1 00:05:50.162 00:05:50.162 ' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.162 --rc genhtml_branch_coverage=1 00:05:50.162 --rc genhtml_function_coverage=1 00:05:50.162 --rc genhtml_legend=1 00:05:50.162 --rc geninfo_all_blocks=1 00:05:50.162 --rc geninfo_unexecuted_blocks=1 00:05:50.162 00:05:50.162 ' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.162 --rc genhtml_branch_coverage=1 00:05:50.162 --rc genhtml_function_coverage=1 00:05:50.162 --rc genhtml_legend=1 00:05:50.162 --rc geninfo_all_blocks=1 00:05:50.162 --rc geninfo_unexecuted_blocks=1 00:05:50.162 00:05:50.162 ' 00:05:50.162 06:30:45 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:50.162 06:30:45 -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.162 06:30:45 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.162 06:30:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:50.162 06:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.162 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.162 ************************************ 00:05:50.162 START TEST event_perf 00:05:50.162 ************************************ 00:05:50.162 06:30:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.162 Running I/O for 1 seconds...[2024-12-05 06:30:45.423691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
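The cmp_versions trace above is how the harness decides whether the installed lcov predates 2.x: both version strings are split on dots and compared numerically field by field. The following is a minimal standalone sketch of that idea, not the actual scripts/common.sh implementation; it simplifies the separator set and hardcodes the less-than operator:

    lt() {
        # True (exit 0) when dotted version $1 sorts strictly below $2.
        # Fields are compared numerically left to right; missing fields
        # are treated as 0, so "1.15" vs "2" compares 1 < 2 and stops.
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "old lcov: use the 1.x option set"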
00:05:50.162 [2024-12-05 06:30:45.423928] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66501 ] 00:05:50.162 [2024-12-05 06:30:45.553956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.162 [2024-12-05 06:30:45.593924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.162 [2024-12-05 06:30:45.594081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.162 [2024-12-05 06:30:45.594160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.162 [2024-12-05 06:30:45.594162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.556 Running I/O for 1 seconds... 00:05:51.556 lcore 0: 187006 00:05:51.556 lcore 1: 187004 00:05:51.556 lcore 2: 187006 00:05:51.556 lcore 3: 187008 00:05:51.556 done. 00:05:51.556 ************************************ 00:05:51.556 END TEST event_perf 00:05:51.556 ************************************ 00:05:51.556 00:05:51.556 real 0m1.243s 00:05:51.556 user 0m4.077s 00:05:51.556 sys 0m0.047s 00:05:51.556 06:30:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.556 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.556 06:30:46 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.556 06:30:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:51.556 06:30:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.556 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.556 ************************************ 00:05:51.556 START TEST event_reactor 00:05:51.556 ************************************ 00:05:51.556 06:30:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.556 [2024-12-05 06:30:46.714843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
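Each test in this log, event_perf above included, is framed by the same START TEST/END TEST banners and real/user/sys timings. Below is a hypothetical reduction of the run_test wrapper behind them; the real helper in autotest_common.sh also validates arguments and toggles xtrace, and stdout/stderr interleaving can reorder the banner and timing lines in the captured output:

    run_test() {
        # Print banners around the command, time it, and preserve its
        # exit status so the harness can fail the suite on error.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }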
00:05:51.556 [2024-12-05 06:30:46.715181] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66545 ] 00:05:51.556 [2024-12-05 06:30:46.846301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.556 [2024-12-05 06:30:46.887099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.491 test_start 00:05:52.491 oneshot 00:05:52.491 tick 100 00:05:52.491 tick 100 00:05:52.491 tick 250 00:05:52.491 tick 100 00:05:52.491 tick 100 00:05:52.491 tick 100 00:05:52.491 tick 500 00:05:52.491 tick 250 00:05:52.491 tick 100 00:05:52.491 tick 100 00:05:52.491 tick 250 00:05:52.491 tick 100 00:05:52.491 tick 100 00:05:52.491 test_end 00:05:52.491 ************************************ 00:05:52.491 END TEST event_reactor 00:05:52.491 ************************************ 00:05:52.491 00:05:52.491 real 0m1.238s 00:05:52.491 user 0m1.088s 00:05:52.491 sys 0m0.043s 00:05:52.491 06:30:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.491 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.750 06:30:47 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.750 06:30:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:52.750 06:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.750 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.750 ************************************ 00:05:52.750 START TEST event_reactor_perf 00:05:52.750 ************************************ 00:05:52.750 06:30:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.750 [2024-12-05 06:30:48.007656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
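The event_reactor run above emits a flat timeline: test_start, one oneshot, a mix of tick 100/250/500 firings, then test_end. Assuming the raw tool output were captured to a file (reactor.log is a hypothetical name, and the lines are assumed free of the CI timestamp prefixes), a one-liner can histogram how often each timer period fired:

    grep -E '^(oneshot|tick [0-9]+)$' reactor.log | sort | uniq -c | sort -rn
    # Expected shape for the run above:
    #       9 tick 100
    #       3 tick 250
    #       1 tick 500
    #       1 oneshot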
00:05:52.750 [2024-12-05 06:30:48.007745] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66575 ] 00:05:52.750 [2024-12-05 06:30:48.143719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.750 [2024-12-05 06:30:48.181065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.130 test_start 00:05:54.130 test_end 00:05:54.130 Performance: 416107 events per second 00:05:54.130 00:05:54.130 real 0m1.244s 00:05:54.130 user 0m1.096s 00:05:54.130 sys 0m0.040s 00:05:54.130 06:30:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.130 ************************************ 00:05:54.130 END TEST event_reactor_perf 00:05:54.130 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.130 ************************************ 00:05:54.130 06:30:49 -- event/event.sh@49 -- # uname -s 00:05:54.130 06:30:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.130 06:30:49 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.130 06:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.130 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.130 ************************************ 00:05:54.130 START TEST event_scheduler 00:05:54.130 ************************************ 00:05:54.130 06:30:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.130 * Looking for test storage... 00:05:54.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.130 06:30:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.130 06:30:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.130 06:30:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.130 06:30:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.130 06:30:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.130 06:30:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.130 06:30:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.130 06:30:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.130 06:30:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.130 06:30:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.130 06:30:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.130 06:30:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.130 06:30:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.130 06:30:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.130 06:30:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.130 06:30:49 -- scripts/common.sh@344 -- # : 1 00:05:54.130 06:30:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.130 06:30:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.130 06:30:49 -- scripts/common.sh@364 -- # decimal 1 00:05:54.130 06:30:49 -- scripts/common.sh@352 -- # local d=1 00:05:54.130 06:30:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.130 06:30:49 -- scripts/common.sh@354 -- # echo 1 00:05:54.130 06:30:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.130 06:30:49 -- scripts/common.sh@365 -- # decimal 2 00:05:54.130 06:30:49 -- scripts/common.sh@352 -- # local d=2 00:05:54.130 06:30:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.130 06:30:49 -- scripts/common.sh@354 -- # echo 2 00:05:54.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.130 06:30:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.130 06:30:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.130 06:30:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.130 06:30:49 -- scripts/common.sh@367 -- # return 0 00:05:54.130 06:30:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.130 --rc genhtml_branch_coverage=1 00:05:54.130 --rc genhtml_function_coverage=1 00:05:54.130 --rc genhtml_legend=1 00:05:54.130 --rc geninfo_all_blocks=1 00:05:54.130 --rc geninfo_unexecuted_blocks=1 00:05:54.130 00:05:54.130 ' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.130 --rc genhtml_branch_coverage=1 00:05:54.130 --rc genhtml_function_coverage=1 00:05:54.130 --rc genhtml_legend=1 00:05:54.130 --rc geninfo_all_blocks=1 00:05:54.130 --rc geninfo_unexecuted_blocks=1 00:05:54.130 00:05:54.130 ' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.130 --rc genhtml_branch_coverage=1 00:05:54.130 --rc genhtml_function_coverage=1 00:05:54.130 --rc genhtml_legend=1 00:05:54.130 --rc geninfo_all_blocks=1 00:05:54.130 --rc geninfo_unexecuted_blocks=1 00:05:54.130 00:05:54.130 ' 00:05:54.130 06:30:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.130 --rc genhtml_branch_coverage=1 00:05:54.130 --rc genhtml_function_coverage=1 00:05:54.130 --rc genhtml_legend=1 00:05:54.130 --rc geninfo_all_blocks=1 00:05:54.130 --rc geninfo_unexecuted_blocks=1 00:05:54.130 00:05:54.130 ' 00:05:54.130 06:30:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.130 06:30:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66638 00:05:54.130 06:30:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.130 06:30:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 66638 00:05:54.130 06:30:49 -- common/autotest_common.sh@829 -- # '[' -z 66638 ']' 00:05:54.130 06:30:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.130 06:30:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.130 06:30:49 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.130 06:30:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
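The scheduler app is launched with --wait-for-rpc, and the harness then blocks in waitforlisten until the daemon answers on /var/tmp/spdk.sock, which is what the "Waiting for process to start up..." echo above reports. A simplified sketch of such a polling loop follows; the real autotest_common.sh helper retries an actual RPC call rather than merely testing for the socket:

    waitforlisten() {
        # Poll until pid $1 has created the UNIX socket $2, or give up.
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # socket is up
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }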
00:05:54.130 06:30:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.130 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.130 [2024-12-05 06:30:49.504643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:54.130 [2024-12-05 06:30:49.504741] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66638 ] 00:05:54.390 [2024-12-05 06:30:49.640623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.390 [2024-12-05 06:30:49.684560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.390 [2024-12-05 06:30:49.684816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.390 [2024-12-05 06:30:49.684657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.390 [2024-12-05 06:30:49.685624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.390 06:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.390 06:30:49 -- common/autotest_common.sh@862 -- # return 0 00:05:54.390 06:30:49 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.390 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.390 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.390 POWER: Env isn't set yet! 00:05:54.390 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:54.390 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.390 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.390 POWER: Attempting to initialise PSTAT power management... 00:05:54.390 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.390 POWER: Cannot set governor of lcore 0 to performance 00:05:54.390 POWER: Attempting to initialise CPPC power management... 00:05:54.390 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.390 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.390 POWER: Attempting to initialise VM power management... 
00:05:54.390 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:54.390 POWER: Unable to set Power Management Environment for lcore 0 00:05:54.390 [2024-12-05 06:30:49.773228] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:54.390 [2024-12-05 06:30:49.773470] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:54.390 [2024-12-05 06:30:49.773715] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.390 [2024-12-05 06:30:49.773929] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.390 [2024-12-05 06:30:49.774149] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.390 [2024-12-05 06:30:49.774420] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.390 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.390 06:30:49 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.390 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.390 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.390 [2024-12-05 06:30:49.827184] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.390 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.390 06:30:49 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.390 06:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.390 06:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.390 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.390 ************************************ 00:05:54.390 START TEST scheduler_create_thread 00:05:54.390 ************************************ 00:05:54.390 06:30:49 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:54.390 06:30:49 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.390 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.390 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.390 2 00:05:54.390 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.390 06:30:49 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.390 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.390 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 3 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 4 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 5 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 6 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 7 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 8 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 9 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 10 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.650 06:30:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:54.650 06:30:49 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:54.650 06:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.650 06:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:55.218 ************************************ 00:05:55.218 END TEST scheduler_create_thread 00:05:55.218 ************************************ 00:05:55.218 06:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.218 00:05:55.218 real 0m0.592s 00:05:55.218 user 0m0.010s 00:05:55.218 sys 0m0.005s 00:05:55.218 06:30:50 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.218 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.218 06:30:50 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:55.218 06:30:50 -- scheduler/scheduler.sh@46 -- # killprocess 66638 00:05:55.218 06:30:50 -- common/autotest_common.sh@936 -- # '[' -z 66638 ']' 00:05:55.218 06:30:50 -- common/autotest_common.sh@940 -- # kill -0 66638 00:05:55.218 06:30:50 -- common/autotest_common.sh@941 -- # uname 00:05:55.218 06:30:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.218 06:30:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66638 00:05:55.218 killing process with pid 66638 00:05:55.218 06:30:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:55.218 06:30:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:55.218 06:30:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66638' 00:05:55.218 06:30:50 -- common/autotest_common.sh@955 -- # kill 66638 00:05:55.218 06:30:50 -- common/autotest_common.sh@960 -- # wait 66638 00:05:55.477 [2024-12-05 06:30:50.909020] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:55.737 ************************************ 00:05:55.737 END TEST event_scheduler 00:05:55.737 ************************************ 00:05:55.737 00:05:55.737 real 0m1.756s 00:05:55.737 user 0m2.187s 00:05:55.737 sys 0m0.288s 00:05:55.737 06:30:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.737 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.737 06:30:51 -- event/event.sh@51 -- # modprobe -n nbd 00:05:55.737 06:30:51 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:55.737 06:30:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.737 06:30:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.737 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.737 ************************************ 00:05:55.737 START TEST app_repeat 00:05:55.737 ************************************ 00:05:55.737 06:30:51 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:55.737 06:30:51 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.737 06:30:51 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.737 06:30:51 -- event/event.sh@13 -- # local nbd_list 00:05:55.737 06:30:51 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.737 06:30:51 -- event/event.sh@14 -- # local bdev_list 00:05:55.737 06:30:51 -- event/event.sh@15 -- # local repeat_times=4 00:05:55.737 06:30:51 -- event/event.sh@17 -- # modprobe nbd 00:05:55.737 Process app_repeat pid: 66708 00:05:55.737 spdk_app_start Round 0 00:05:55.737 06:30:51 -- event/event.sh@19 -- # repeat_pid=66708 00:05:55.737 06:30:51 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.737 06:30:51 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:55.737 06:30:51 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66708' 00:05:55.737 06:30:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.737 06:30:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:55.737 06:30:51 -- event/event.sh@25 -- # waitforlisten 66708 /var/tmp/spdk-nbd.sock 00:05:55.737 06:30:51 -- common/autotest_common.sh@829 -- # '[' -z 66708 ']' 00:05:55.737 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:05:55.737 06:30:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.737 06:30:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.737 06:30:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.737 06:30:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.737 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.737 [2024-12-05 06:30:51.136817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:55.737 [2024-12-05 06:30:51.137546] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66708 ] 00:05:55.996 [2024-12-05 06:30:51.282328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.996 [2024-12-05 06:30:51.326355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.996 [2024-12-05 06:30:51.326392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.996 06:30:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.996 06:30:51 -- common/autotest_common.sh@862 -- # return 0 00:05:55.996 06:30:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.255 Malloc0 00:05:56.255 06:30:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.823 Malloc1 00:05:56.823 06:30:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.823 06:30:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.082 /dev/nbd0 00:05:57.082 06:30:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.082 06:30:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.082 06:30:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:57.082 06:30:52 -- common/autotest_common.sh@867 -- # local i 00:05:57.082 06:30:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.082 06:30:52 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.082 06:30:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:57.082 06:30:52 -- common/autotest_common.sh@871 -- # break 00:05:57.082 06:30:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.082 06:30:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.082 06:30:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.082 1+0 records in 00:05:57.082 1+0 records out 00:05:57.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322371 s, 12.7 MB/s 00:05:57.082 06:30:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.082 06:30:52 -- common/autotest_common.sh@884 -- # size=4096 00:05:57.082 06:30:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.082 06:30:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:57.082 06:30:52 -- common/autotest_common.sh@887 -- # return 0 00:05:57.082 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.082 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.082 06:30:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.341 /dev/nbd1 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.341 06:30:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:57.341 06:30:52 -- common/autotest_common.sh@867 -- # local i 00:05:57.341 06:30:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.341 06:30:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.341 06:30:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:57.341 06:30:52 -- common/autotest_common.sh@871 -- # break 00:05:57.341 06:30:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.341 06:30:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.341 06:30:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.341 1+0 records in 00:05:57.341 1+0 records out 00:05:57.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335318 s, 12.2 MB/s 00:05:57.341 06:30:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.341 06:30:52 -- common/autotest_common.sh@884 -- # size=4096 00:05:57.341 06:30:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.341 06:30:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:57.341 06:30:52 -- common/autotest_common.sh@887 -- # return 0 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.341 06:30:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.600 { 00:05:57.600 "nbd_device": "/dev/nbd0", 00:05:57.600 "bdev_name": "Malloc0" 00:05:57.600 }, 00:05:57.600 { 00:05:57.600 "nbd_device": "/dev/nbd1", 
00:05:57.600 "bdev_name": "Malloc1" 00:05:57.600 } 00:05:57.600 ]' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.600 { 00:05:57.600 "nbd_device": "/dev/nbd0", 00:05:57.600 "bdev_name": "Malloc0" 00:05:57.600 }, 00:05:57.600 { 00:05:57.600 "nbd_device": "/dev/nbd1", 00:05:57.600 "bdev_name": "Malloc1" 00:05:57.600 } 00:05:57.600 ]' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.600 /dev/nbd1' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.600 /dev/nbd1' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.600 256+0 records in 00:05:57.600 256+0 records out 00:05:57.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00820248 s, 128 MB/s 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.600 06:30:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.600 256+0 records in 00:05:57.600 256+0 records out 00:05:57.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245663 s, 42.7 MB/s 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.600 256+0 records in 00:05:57.600 256+0 records out 00:05:57.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232085 s, 45.2 MB/s 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@85 -- # rm 
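The dd/cmp sequence above is the data-integrity pass of nbd_rpc_data_verify. Boiled down to standalone commands (a sketch; /tmp/nbdrandtest stands in for the repo path in the log):

    # seed 1 MiB of random data
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    # push it through each exported device with O_DIRECT
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    # read it back through the same devices and compare byte for byte
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest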
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@51 -- # local i 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.600 06:30:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@41 -- # break 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.166 06:30:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@41 -- # break 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.425 06:30:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.683 06:30:53 -- bdev/nbd_common.sh@65 -- # true 00:05:58.683 06:30:54 -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.683 06:30:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.683 06:30:54 -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.683 06:30:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.683 06:30:54 -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.683 06:30:54 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.941 06:30:54 -- event/event.sh@35 -- # sleep 3 00:05:58.941 [2024-12-05 06:30:54.371749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.941 [2024-12-05 06:30:54.403471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.941 [2024-12-05 
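nbd_get_count, traced twice above (count=2 while the devices are exported, count=0 after teardown), is a short jq pipeline. A sketch, with rpc.py abbreviating the scripts/rpc.py path from the log:

    nbd_get_count() {
        local rpc_server=$1
        local disks_json disks_name
        disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints the match count; the trailing true (visible as "# true"
        # in the trace) keeps grep's exit 1 on zero matches from tripping set -e
        echo "$disks_name" | grep -c /dev/nbd || true
    }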
06:30:54.403483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.199 [2024-12-05 06:30:54.432679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.199 [2024-12-05 06:30:54.432775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.481 spdk_app_start Round 1 00:06:02.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.481 06:30:57 -- event/event.sh@23 -- # for i in {0..2} 00:06:02.481 06:30:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.481 06:30:57 -- event/event.sh@25 -- # waitforlisten 66708 /var/tmp/spdk-nbd.sock 00:06:02.481 06:30:57 -- common/autotest_common.sh@829 -- # '[' -z 66708 ']' 00:06:02.481 06:30:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.481 06:30:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.481 06:30:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.481 06:30:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.481 06:30:57 -- common/autotest_common.sh@10 -- # set +x 00:06:02.481 06:30:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.481 06:30:57 -- common/autotest_common.sh@862 -- # return 0 00:06:02.481 06:30:57 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.481 Malloc0 00:06:02.481 06:30:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.738 Malloc1 00:06:02.738 06:30:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.738 06:30:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.996 /dev/nbd0 00:06:02.996 06:30:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.996 06:30:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.996 06:30:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:02.996 06:30:58 -- common/autotest_common.sh@867 -- # local i 00:06:02.996 06:30:58 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:02.996 06:30:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:02.996 06:30:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:02.996 06:30:58 -- common/autotest_common.sh@871 -- # break 00:06:02.996 06:30:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:02.996 06:30:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:02.996 06:30:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.996 1+0 records in 00:06:02.996 1+0 records out 00:06:02.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309164 s, 13.2 MB/s 00:06:02.996 06:30:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.996 06:30:58 -- common/autotest_common.sh@884 -- # size=4096 00:06:02.996 06:30:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.996 06:30:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:02.996 06:30:58 -- common/autotest_common.sh@887 -- # return 0 00:06:02.996 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.996 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.996 06:30:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.253 /dev/nbd1 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.253 06:30:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:03.253 06:30:58 -- common/autotest_common.sh@867 -- # local i 00:06:03.253 06:30:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.253 06:30:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.253 06:30:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:03.253 06:30:58 -- common/autotest_common.sh@871 -- # break 00:06:03.253 06:30:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.253 06:30:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.253 06:30:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.253 1+0 records in 00:06:03.253 1+0 records out 00:06:03.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269102 s, 15.2 MB/s 00:06:03.253 06:30:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.253 06:30:58 -- common/autotest_common.sh@884 -- # size=4096 00:06:03.253 06:30:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.253 06:30:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.253 06:30:58 -- common/autotest_common.sh@887 -- # return 0 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.253 06:30:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.510 { 00:06:03.510 "nbd_device": "/dev/nbd0", 00:06:03.510 "bdev_name": "Malloc0" 00:06:03.510 }, 00:06:03.510 { 00:06:03.510 
"nbd_device": "/dev/nbd1", 00:06:03.510 "bdev_name": "Malloc1" 00:06:03.510 } 00:06:03.510 ]' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.510 { 00:06:03.510 "nbd_device": "/dev/nbd0", 00:06:03.510 "bdev_name": "Malloc0" 00:06:03.510 }, 00:06:03.510 { 00:06:03.510 "nbd_device": "/dev/nbd1", 00:06:03.510 "bdev_name": "Malloc1" 00:06:03.510 } 00:06:03.510 ]' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.510 /dev/nbd1' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.510 /dev/nbd1' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.510 06:30:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.510 256+0 records in 00:06:03.510 256+0 records out 00:06:03.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700612 s, 150 MB/s 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.511 256+0 records in 00:06:03.511 256+0 records out 00:06:03.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233462 s, 44.9 MB/s 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.511 256+0 records in 00:06:03.511 256+0 records out 00:06:03.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279382 s, 37.5 MB/s 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.511 06:30:58 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.511 06:30:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@41 -- # break 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.075 06:30:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@41 -- # break 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.333 06:30:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@65 -- # true 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.591 06:30:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.591 06:30:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.850 06:31:00 -- event/event.sh@35 -- # sleep 3 00:06:04.850 [2024-12-05 06:31:00.216605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.850 [2024-12-05 06:31:00.248497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
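waitfornbd_exit, traced above while the devices are torn down, is the mirror image of waitfornbd: poll until the name disappears from /proc/partitions. A sketch reconstructed from the trace (the if/else shape and the sleep are inferred; both devices here vanish on the first probe):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for (( i = 1; i <= 20; i++ )); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break   # the device is gone
            fi
            sleep 0.1   # assumed back-off between probes
        done
        return 0
    }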
00:06:04.850 [2024-12-05 06:31:00.248510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.850 [2024-12-05 06:31:00.280411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.850 [2024-12-05 06:31:00.280514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.177 spdk_app_start Round 2 00:06:08.177 06:31:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.177 06:31:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.177 06:31:03 -- event/event.sh@25 -- # waitforlisten 66708 /var/tmp/spdk-nbd.sock 00:06:08.177 06:31:03 -- common/autotest_common.sh@829 -- # '[' -z 66708 ']' 00:06:08.177 06:31:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.177 06:31:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.177 06:31:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.177 06:31:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.177 06:31:03 -- common/autotest_common.sh@10 -- # set +x 00:06:08.177 06:31:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.177 06:31:03 -- common/autotest_common.sh@862 -- # return 0 00:06:08.177 06:31:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.436 Malloc0 00:06:08.436 06:31:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.436 Malloc1 00:06:08.695 06:31:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.695 06:31:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.695 /dev/nbd0 00:06:08.695 06:31:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.695 06:31:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.695 06:31:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:08.695 06:31:04 -- common/autotest_common.sh@867 -- # local i 00:06:08.695 06:31:04 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.695 06:31:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.695 06:31:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:08.695 06:31:04 -- common/autotest_common.sh@871 -- # break 00:06:08.695 06:31:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.695 06:31:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.695 06:31:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.695 1+0 records in 00:06:08.695 1+0 records out 00:06:08.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165876 s, 24.7 MB/s 00:06:08.695 06:31:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.695 06:31:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:08.695 06:31:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.695 06:31:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.695 06:31:04 -- common/autotest_common.sh@887 -- # return 0 00:06:08.695 06:31:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.695 06:31:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.695 06:31:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.953 /dev/nbd1 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.213 06:31:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:09.213 06:31:04 -- common/autotest_common.sh@867 -- # local i 00:06:09.213 06:31:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.213 06:31:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.213 06:31:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:09.213 06:31:04 -- common/autotest_common.sh@871 -- # break 00:06:09.213 06:31:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.213 06:31:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.213 06:31:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.213 1+0 records in 00:06:09.213 1+0 records out 00:06:09.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319623 s, 12.8 MB/s 00:06:09.213 06:31:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.213 06:31:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:09.213 06:31:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.213 06:31:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.213 06:31:04 -- common/autotest_common.sh@887 -- # return 0 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.213 06:31:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.473 { 00:06:09.473 "nbd_device": "/dev/nbd0", 00:06:09.473 "bdev_name": "Malloc0" 
00:06:09.473 }, 00:06:09.473 { 00:06:09.473 "nbd_device": "/dev/nbd1", 00:06:09.473 "bdev_name": "Malloc1" 00:06:09.473 } 00:06:09.473 ]' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.473 { 00:06:09.473 "nbd_device": "/dev/nbd0", 00:06:09.473 "bdev_name": "Malloc0" 00:06:09.473 }, 00:06:09.473 { 00:06:09.473 "nbd_device": "/dev/nbd1", 00:06:09.473 "bdev_name": "Malloc1" 00:06:09.473 } 00:06:09.473 ]' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.473 /dev/nbd1' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.473 /dev/nbd1' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.473 256+0 records in 00:06:09.473 256+0 records out 00:06:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507159 s, 207 MB/s 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.473 256+0 records in 00:06:09.473 256+0 records out 00:06:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213304 s, 49.2 MB/s 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.473 256+0 records in 00:06:09.473 256+0 records out 00:06:09.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269346 s, 38.9 MB/s 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.473 06:31:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@41 -- # break 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.733 06:31:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@41 -- # break 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.991 06:31:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@65 -- # true 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.249 06:31:05 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.249 06:31:05 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.509 06:31:05 -- event/event.sh@35 -- # sleep 3 00:06:10.767 [2024-12-05 06:31:06.029479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.767 [2024-12-05 06:31:06.059183] reactor.c: 937:reactor_run: 
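Stripped of the helpers, each app_repeat round is the same handful of RPCs against the nbd socket (a usage sketch; rpc.py abbreviates the scripts/rpc.py path, and 64/4096 are the malloc bdev's size in MB and block size):

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # export it
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # JSON bdev <-> device map
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0           # tear it down
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # stop the app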
*NOTICE*: Reactor started on core 1 00:06:10.767 [2024-12-05 06:31:06.059193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.767 [2024-12-05 06:31:06.087289] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.767 [2024-12-05 06:31:06.087377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.055 06:31:08 -- event/event.sh@38 -- # waitforlisten 66708 /var/tmp/spdk-nbd.sock 00:06:14.055 06:31:08 -- common/autotest_common.sh@829 -- # '[' -z 66708 ']' 00:06:14.055 06:31:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.055 06:31:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.055 06:31:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.055 06:31:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.055 06:31:08 -- common/autotest_common.sh@10 -- # set +x 00:06:14.055 06:31:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.055 06:31:09 -- common/autotest_common.sh@862 -- # return 0 00:06:14.055 06:31:09 -- event/event.sh@39 -- # killprocess 66708 00:06:14.055 06:31:09 -- common/autotest_common.sh@936 -- # '[' -z 66708 ']' 00:06:14.055 06:31:09 -- common/autotest_common.sh@940 -- # kill -0 66708 00:06:14.055 06:31:09 -- common/autotest_common.sh@941 -- # uname 00:06:14.055 06:31:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.055 06:31:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66708 00:06:14.055 killing process with pid 66708 00:06:14.055 06:31:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.055 06:31:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.055 06:31:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66708' 00:06:14.055 06:31:09 -- common/autotest_common.sh@955 -- # kill 66708 00:06:14.055 06:31:09 -- common/autotest_common.sh@960 -- # wait 66708 00:06:14.055 spdk_app_start is called in Round 0. 00:06:14.055 Shutdown signal received, stop current app iteration 00:06:14.055 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:14.055 spdk_app_start is called in Round 1. 00:06:14.055 Shutdown signal received, stop current app iteration 00:06:14.055 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:14.055 spdk_app_start is called in Round 2. 00:06:14.055 Shutdown signal received, stop current app iteration 00:06:14.055 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:14.055 spdk_app_start is called in Round 3. 
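killprocess, which reaps pid 66708 in the trace above, follows this shape (a sketch from the trace; the sudo special case is checked there but never taken, since the process name is reactor_0):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                      # fail fast if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
            # the real helper branches when process_name is sudo; this sketch
            # leaves that branch out
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                 # reap; nonzero exit after SIGTERM is expected
    }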
00:06:14.055 Shutdown signal received, stop current app iteration 00:06:14.055 06:31:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.055 06:31:09 -- event/event.sh@42 -- # return 0 00:06:14.055 00:06:14.055 real 0m18.269s 00:06:14.055 user 0m41.798s 00:06:14.055 sys 0m2.501s 00:06:14.055 06:31:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.055 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:06:14.055 ************************************ 00:06:14.055 END TEST app_repeat 00:06:14.055 ************************************ 00:06:14.055 06:31:09 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.055 06:31:09 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.055 06:31:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.055 06:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.055 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:06:14.055 ************************************ 00:06:14.055 START TEST cpu_locks 00:06:14.055 ************************************ 00:06:14.055 06:31:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:14.055 * Looking for test storage... 00:06:14.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:14.055 06:31:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:14.055 06:31:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:14.055 06:31:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:14.313 06:31:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:14.313 06:31:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:14.313 06:31:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:14.313 06:31:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:14.313 06:31:09 -- scripts/common.sh@335 -- # IFS=.-: 00:06:14.313 06:31:09 -- scripts/common.sh@335 -- # read -ra ver1 00:06:14.313 06:31:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.313 06:31:09 -- scripts/common.sh@336 -- # read -ra ver2 00:06:14.313 06:31:09 -- scripts/common.sh@337 -- # local 'op=<' 00:06:14.313 06:31:09 -- scripts/common.sh@339 -- # ver1_l=2 00:06:14.313 06:31:09 -- scripts/common.sh@340 -- # ver2_l=1 00:06:14.313 06:31:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:14.313 06:31:09 -- scripts/common.sh@343 -- # case "$op" in 00:06:14.313 06:31:09 -- scripts/common.sh@344 -- # : 1 00:06:14.313 06:31:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:14.313 06:31:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.313 06:31:09 -- scripts/common.sh@364 -- # decimal 1 00:06:14.314 06:31:09 -- scripts/common.sh@352 -- # local d=1 00:06:14.314 06:31:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.314 06:31:09 -- scripts/common.sh@354 -- # echo 1 00:06:14.314 06:31:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:14.314 06:31:09 -- scripts/common.sh@365 -- # decimal 2 00:06:14.314 06:31:09 -- scripts/common.sh@352 -- # local d=2 00:06:14.314 06:31:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.314 06:31:09 -- scripts/common.sh@354 -- # echo 2 00:06:14.314 06:31:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:14.314 06:31:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:14.314 06:31:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:14.314 06:31:09 -- scripts/common.sh@367 -- # return 0 00:06:14.314 06:31:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.314 06:31:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.314 --rc genhtml_branch_coverage=1 00:06:14.314 --rc genhtml_function_coverage=1 00:06:14.314 --rc genhtml_legend=1 00:06:14.314 --rc geninfo_all_blocks=1 00:06:14.314 --rc geninfo_unexecuted_blocks=1 00:06:14.314 00:06:14.314 ' 00:06:14.314 06:31:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.314 --rc genhtml_branch_coverage=1 00:06:14.314 --rc genhtml_function_coverage=1 00:06:14.314 --rc genhtml_legend=1 00:06:14.314 --rc geninfo_all_blocks=1 00:06:14.314 --rc geninfo_unexecuted_blocks=1 00:06:14.314 00:06:14.314 ' 00:06:14.314 06:31:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.314 --rc genhtml_branch_coverage=1 00:06:14.314 --rc genhtml_function_coverage=1 00:06:14.314 --rc genhtml_legend=1 00:06:14.314 --rc geninfo_all_blocks=1 00:06:14.314 --rc geninfo_unexecuted_blocks=1 00:06:14.314 00:06:14.314 ' 00:06:14.314 06:31:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.314 --rc genhtml_branch_coverage=1 00:06:14.314 --rc genhtml_function_coverage=1 00:06:14.314 --rc genhtml_legend=1 00:06:14.314 --rc geninfo_all_blocks=1 00:06:14.314 --rc geninfo_unexecuted_blocks=1 00:06:14.314 00:06:14.314 ' 00:06:14.314 06:31:09 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.314 06:31:09 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.314 06:31:09 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.314 06:31:09 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.314 06:31:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.314 06:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.314 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:06:14.314 ************************************ 00:06:14.314 START TEST default_locks 00:06:14.314 ************************************ 00:06:14.314 06:31:09 -- common/autotest_common.sh@1114 -- # default_locks 00:06:14.314 06:31:09 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67140 00:06:14.314 06:31:09 -- event/cpu_locks.sh@47 -- # waitforlisten 67140 00:06:14.314 06:31:09 -- common/autotest_common.sh@829 -- # '[' -z 67140 ']' 00:06:14.314 06:31:09 
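The scripts/common.sh trace above is `lt 1.15 2` deciding whether the installed lcov predates 2.x: split both versions on dots and dashes, then compare field by field. A sketch assuming purely numeric fields (the real helper also runs each field through decimal() to validate it, as the trace shows):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        local op=$2
        IFS=.- read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 == d2 )) && continue
            if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
            [[ $op == '>' || $op == '>=' ]]; return
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions are equal
    }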
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.314 06:31:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.314 06:31:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.314 06:31:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.314 06:31:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.314 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:06:14.314 [2024-12-05 06:31:09.684896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.314 [2024-12-05 06:31:09.685020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67140 ] 00:06:14.572 [2024-12-05 06:31:09.823488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.572 [2024-12-05 06:31:09.855091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.572 [2024-12-05 06:31:09.855297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.507 06:31:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.507 06:31:10 -- common/autotest_common.sh@862 -- # return 0 00:06:15.507 06:31:10 -- event/cpu_locks.sh@49 -- # locks_exist 67140 00:06:15.507 06:31:10 -- event/cpu_locks.sh@22 -- # lslocks -p 67140 00:06:15.507 06:31:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.507 06:31:10 -- event/cpu_locks.sh@50 -- # killprocess 67140 00:06:15.507 06:31:10 -- common/autotest_common.sh@936 -- # '[' -z 67140 ']' 00:06:15.507 06:31:10 -- common/autotest_common.sh@940 -- # kill -0 67140 00:06:15.508 06:31:10 -- common/autotest_common.sh@941 -- # uname 00:06:15.508 06:31:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.508 06:31:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67140 00:06:15.508 06:31:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.508 06:31:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.508 06:31:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67140' 00:06:15.508 killing process with pid 67140 00:06:15.508 06:31:10 -- common/autotest_common.sh@955 -- # kill 67140 00:06:15.508 06:31:10 -- common/autotest_common.sh@960 -- # wait 67140 00:06:15.766 06:31:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67140 00:06:15.766 06:31:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.766 06:31:11 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67140 00:06:15.766 06:31:11 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:15.766 06:31:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.766 06:31:11 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.766 06:31:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.766 06:31:11 -- common/autotest_common.sh@653 -- # waitforlisten 67140 00:06:15.766 06:31:11 -- common/autotest_common.sh@829 -- # '[' -z 67140 ']' 00:06:15.766 06:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.767 06:31:11 -- 
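locks_exist, run above right after the target comes up, is a one-liner: a target started with -m 0x1 must hold a POSIX file lock whose path contains spdk_cpu_lock for the core it claimed.

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 67140   # as in the trace: succeeds while spdk_tgt owns core 0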
common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.767 06:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.767 06:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.767 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67140) - No such process 00:06:15.767 ERROR: process (pid: 67140) is no longer running 00:06:15.767 06:31:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.767 06:31:11 -- common/autotest_common.sh@862 -- # return 1 00:06:15.767 06:31:11 -- common/autotest_common.sh@653 -- # es=1 00:06:15.767 06:31:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.767 06:31:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.767 06:31:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.767 06:31:11 -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.767 06:31:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.767 06:31:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.767 06:31:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.767 00:06:15.767 real 0m1.548s 00:06:15.767 user 0m1.762s 00:06:15.767 sys 0m0.378s 00:06:15.767 06:31:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.767 ************************************ 00:06:15.767 END TEST default_locks 00:06:15.767 ************************************ 00:06:15.767 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.767 06:31:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.767 06:31:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.767 06:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.767 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.767 ************************************ 00:06:15.767 START TEST default_locks_via_rpc 00:06:15.767 ************************************ 00:06:15.767 06:31:11 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:15.767 06:31:11 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67187 00:06:15.767 06:31:11 -- event/cpu_locks.sh@63 -- # waitforlisten 67187 00:06:15.767 06:31:11 -- common/autotest_common.sh@829 -- # '[' -z 67187 ']' 00:06:15.767 06:31:11 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.767 06:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.767 06:31:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.767 06:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.767 06:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.767 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:16.026 [2024-12-05 06:31:11.281419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
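The NOT wrapper traced above inverts a command's exit status: the test passes only because waitforlisten really does fail against the killed target. A sketch (the signal-exit handling behind the `(( es > 128 ))` probe is not visible in the trace, so it is only noted in a comment):

    NOT() {
        local es=0
        "$@" || es=$?
        # the real helper also massages exit codes above 128 ("killed by
        # signal N") at this point -- the (( es > 128 )) check in the trace
        (( !es == 0 ))   # success only when the wrapped command failed
    }

    NOT waitforlisten 67140   # passes: the pid is gone, so waitforlisten returns 1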
00:06:16.026 [2024-12-05 06:31:11.281524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67187 ] 00:06:16.026 [2024-12-05 06:31:11.420237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.026 [2024-12-05 06:31:11.455368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.026 [2024-12-05 06:31:11.455530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.963 06:31:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.963 06:31:12 -- common/autotest_common.sh@862 -- # return 0 00:06:16.963 06:31:12 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.963 06:31:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.963 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.963 06:31:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.963 06:31:12 -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.963 06:31:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.963 06:31:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.963 06:31:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.963 06:31:12 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.963 06:31:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.963 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.963 06:31:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.963 06:31:12 -- event/cpu_locks.sh@71 -- # locks_exist 67187 00:06:16.963 06:31:12 -- event/cpu_locks.sh@22 -- # lslocks -p 67187 00:06:16.963 06:31:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.532 06:31:12 -- event/cpu_locks.sh@73 -- # killprocess 67187 00:06:17.532 06:31:12 -- common/autotest_common.sh@936 -- # '[' -z 67187 ']' 00:06:17.532 06:31:12 -- common/autotest_common.sh@940 -- # kill -0 67187 00:06:17.532 06:31:12 -- common/autotest_common.sh@941 -- # uname 00:06:17.532 06:31:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.532 06:31:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67187 00:06:17.532 06:31:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.532 06:31:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.532 killing process with pid 67187 00:06:17.532 06:31:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67187' 00:06:17.532 06:31:12 -- common/autotest_common.sh@955 -- # kill 67187 00:06:17.532 06:31:12 -- common/autotest_common.sh@960 -- # wait 67187 00:06:17.792 00:06:17.792 real 0m1.789s 00:06:17.792 user 0m2.102s 00:06:17.792 sys 0m0.462s 00:06:17.792 06:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.792 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:17.792 ************************************ 00:06:17.792 END TEST default_locks_via_rpc 00:06:17.792 ************************************ 00:06:17.792 06:31:13 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.792 06:31:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.792 06:31:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.792 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:17.792 
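Boiled down, default_locks_via_rpc toggles the core lock over the RPC socket and rechecks it with the same lslocks probe (a usage sketch; $pid is the spdk_tgt started for the test, and rpc.py abbreviates the scripts/rpc.py path):

    rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks     # drop the core-0 lock
    lslocks -p "$pid" | grep -c spdk_cpu_lock || true                # prints 0 now
    rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks     # retake it
    lslocks -p "$pid" | grep -q spdk_cpu_lock                        # locks_exist passes again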
************************************ 00:06:17.792 START TEST non_locking_app_on_locked_coremask 00:06:17.792 ************************************ 00:06:17.792 06:31:13 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:17.792 06:31:13 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67232 00:06:17.792 06:31:13 -- event/cpu_locks.sh@81 -- # waitforlisten 67232 /var/tmp/spdk.sock 00:06:17.792 06:31:13 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.792 06:31:13 -- common/autotest_common.sh@829 -- # '[' -z 67232 ']' 00:06:17.792 06:31:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.792 06:31:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.792 06:31:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.792 06:31:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.792 06:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:17.792 [2024-12-05 06:31:13.122679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.792 [2024-12-05 06:31:13.122783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67232 ] 00:06:18.051 [2024-12-05 06:31:13.260936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.051 [2024-12-05 06:31:13.292829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.051 [2024-12-05 06:31:13.293006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.988 06:31:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.988 06:31:14 -- common/autotest_common.sh@862 -- # return 0 00:06:18.988 06:31:14 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67254 00:06:18.988 06:31:14 -- event/cpu_locks.sh@85 -- # waitforlisten 67254 /var/tmp/spdk2.sock 00:06:18.988 06:31:14 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.988 06:31:14 -- common/autotest_common.sh@829 -- # '[' -z 67254 ']' 00:06:18.988 06:31:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.988 06:31:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.988 06:31:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.988 06:31:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.988 06:31:14 -- common/autotest_common.sh@10 -- # set +x 00:06:18.988 [2024-12-05 06:31:14.179354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.988 [2024-12-05 06:31:14.179461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67254 ] 00:06:18.988 [2024-12-05 06:31:14.318247] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
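The setup above in two commands: both targets ask for core 0 via -m 0x1, and the second can come up only because it opts out of the CPU-mask lock and listens on its own RPC socket (spdk_tgt abbreviates build/bin/spdk_tgt from the log):

    spdk_tgt -m 0x1 &                                                  # pid 67232, locks core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 67254, no lock taken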
00:06:18.989 [2024-12-05 06:31:14.318296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.989 [2024-12-05 06:31:14.379281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.989 [2024-12-05 06:31:14.383558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.925 06:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.925 06:31:15 -- common/autotest_common.sh@862 -- # return 0 00:06:19.925 06:31:15 -- event/cpu_locks.sh@87 -- # locks_exist 67232 00:06:19.925 06:31:15 -- event/cpu_locks.sh@22 -- # lslocks -p 67232 00:06:19.925 06:31:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.490 06:31:15 -- event/cpu_locks.sh@89 -- # killprocess 67232 00:06:20.490 06:31:15 -- common/autotest_common.sh@936 -- # '[' -z 67232 ']' 00:06:20.490 06:31:15 -- common/autotest_common.sh@940 -- # kill -0 67232 00:06:20.490 06:31:15 -- common/autotest_common.sh@941 -- # uname 00:06:20.490 06:31:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.490 06:31:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67232 00:06:20.490 06:31:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.490 06:31:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.490 killing process with pid 67232 00:06:20.490 06:31:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67232' 00:06:20.490 06:31:15 -- common/autotest_common.sh@955 -- # kill 67232 00:06:20.490 06:31:15 -- common/autotest_common.sh@960 -- # wait 67232 00:06:20.748 06:31:16 -- event/cpu_locks.sh@90 -- # killprocess 67254 00:06:20.748 06:31:16 -- common/autotest_common.sh@936 -- # '[' -z 67254 ']' 00:06:20.748 06:31:16 -- common/autotest_common.sh@940 -- # kill -0 67254 00:06:20.748 06:31:16 -- common/autotest_common.sh@941 -- # uname 00:06:20.748 06:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.748 06:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67254 00:06:21.007 06:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.007 killing process with pid 67254 00:06:21.007 06:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.007 06:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67254' 00:06:21.007 06:31:16 -- common/autotest_common.sh@955 -- # kill 67254 00:06:21.007 06:31:16 -- common/autotest_common.sh@960 -- # wait 67254 00:06:21.007 00:06:21.007 real 0m3.366s 00:06:21.007 user 0m4.041s 00:06:21.007 sys 0m0.766s 00:06:21.007 06:31:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.007 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.007 ************************************ 00:06:21.007 END TEST non_locking_app_on_locked_coremask 00:06:21.007 ************************************ 00:06:21.267 06:31:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.267 06:31:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.267 06:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.267 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.267 ************************************ 00:06:21.267 START TEST locking_app_on_unlocked_coremask 00:06:21.267 ************************************ 00:06:21.267 06:31:16 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:21.267 06:31:16 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67310 00:06:21.267 06:31:16 -- event/cpu_locks.sh@99 -- # waitforlisten 67310 /var/tmp/spdk.sock 00:06:21.267 06:31:16 -- common/autotest_common.sh@829 -- # '[' -z 67310 ']' 00:06:21.267 06:31:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.267 06:31:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.267 06:31:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.267 06:31:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.267 06:31:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.267 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.267 [2024-12-05 06:31:16.533536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.267 [2024-12-05 06:31:16.533625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67310 ] 00:06:21.267 [2024-12-05 06:31:16.667481] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.267 [2024-12-05 06:31:16.667531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.267 [2024-12-05 06:31:16.697917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.267 [2024-12-05 06:31:16.698073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.202 06:31:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.202 06:31:17 -- common/autotest_common.sh@862 -- # return 0 00:06:22.202 06:31:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67326 00:06:22.202 06:31:17 -- event/cpu_locks.sh@103 -- # waitforlisten 67326 /var/tmp/spdk2.sock 00:06:22.202 06:31:17 -- common/autotest_common.sh@829 -- # '[' -z 67326 ']' 00:06:22.202 06:31:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.202 06:31:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.202 06:31:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.203 06:31:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.203 06:31:17 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.203 06:31:17 -- common/autotest_common.sh@10 -- # set +x 00:06:22.203 [2024-12-05 06:31:17.579132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:22.203 [2024-12-05 06:31:17.579228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67326 ] 00:06:22.464 [2024-12-05 06:31:17.716683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.464 [2024-12-05 06:31:17.774646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.464 [2024-12-05 06:31:17.774819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.108 06:31:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.108 06:31:18 -- common/autotest_common.sh@862 -- # return 0 00:06:23.108 06:31:18 -- event/cpu_locks.sh@105 -- # locks_exist 67326 00:06:23.108 06:31:18 -- event/cpu_locks.sh@22 -- # lslocks -p 67326 00:06:23.108 06:31:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.046 06:31:19 -- event/cpu_locks.sh@107 -- # killprocess 67310 00:06:24.046 06:31:19 -- common/autotest_common.sh@936 -- # '[' -z 67310 ']' 00:06:24.046 06:31:19 -- common/autotest_common.sh@940 -- # kill -0 67310 00:06:24.046 06:31:19 -- common/autotest_common.sh@941 -- # uname 00:06:24.046 06:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.046 06:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67310 00:06:24.046 06:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.046 killing process with pid 67310 00:06:24.046 06:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.046 06:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67310' 00:06:24.046 06:31:19 -- common/autotest_common.sh@955 -- # kill 67310 00:06:24.046 06:31:19 -- common/autotest_common.sh@960 -- # wait 67310 00:06:24.305 06:31:19 -- event/cpu_locks.sh@108 -- # killprocess 67326 00:06:24.305 06:31:19 -- common/autotest_common.sh@936 -- # '[' -z 67326 ']' 00:06:24.305 06:31:19 -- common/autotest_common.sh@940 -- # kill -0 67326 00:06:24.305 06:31:19 -- common/autotest_common.sh@941 -- # uname 00:06:24.305 06:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.305 06:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67326 00:06:24.565 06:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.565 killing process with pid 67326 00:06:24.565 06:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.565 06:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67326' 00:06:24.565 06:31:19 -- common/autotest_common.sh@955 -- # kill 67326 00:06:24.565 06:31:19 -- common/autotest_common.sh@960 -- # wait 67326 00:06:24.565 00:06:24.565 real 0m3.509s 00:06:24.565 user 0m4.169s 00:06:24.565 sys 0m0.882s 00:06:24.565 ************************************ 00:06:24.565 END TEST locking_app_on_unlocked_coremask 00:06:24.565 ************************************ 00:06:24.565 06:31:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.565 06:31:19 -- common/autotest_common.sh@10 -- # set +x 00:06:24.824 06:31:20 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.824 06:31:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.824 06:31:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.824 06:31:20 -- common/autotest_common.sh@10 -- # set +x 
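The test that just finished, locking_app_on_unlocked_coremask, starts its first target with --disable-cpumask-locks so that a second target on the same core mask can claim the core 0 lock itself. A rough manual reproduction, assuming the repo layout used above (both commands copied from the trace):

    # first instance leaves core 0 unclaimed
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # second instance on the same mask starts cleanly and takes the lock,
    # on a separate RPC socket so the two targets do not collide
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &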
00:06:24.824 ************************************ 00:06:24.824 START TEST locking_app_on_locked_coremask 00:06:24.824 ************************************ 00:06:24.824 06:31:20 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:24.824 06:31:20 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67387 00:06:24.824 06:31:20 -- event/cpu_locks.sh@116 -- # waitforlisten 67387 /var/tmp/spdk.sock 00:06:24.824 06:31:20 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.824 06:31:20 -- common/autotest_common.sh@829 -- # '[' -z 67387 ']' 00:06:24.824 06:31:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.824 06:31:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.824 06:31:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.824 06:31:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.824 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.824 [2024-12-05 06:31:20.105279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:24.824 [2024-12-05 06:31:20.105395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67387 ] 00:06:24.824 [2024-12-05 06:31:20.244095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.824 [2024-12-05 06:31:20.283553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.824 [2024-12-05 06:31:20.283764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.762 06:31:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.762 06:31:21 -- common/autotest_common.sh@862 -- # return 0 00:06:25.762 06:31:21 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67403 00:06:25.762 06:31:21 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.762 06:31:21 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67403 /var/tmp/spdk2.sock 00:06:25.762 06:31:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:25.762 06:31:21 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67403 /var/tmp/spdk2.sock 00:06:25.762 06:31:21 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:25.762 06:31:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.762 06:31:21 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:25.762 06:31:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.762 06:31:21 -- common/autotest_common.sh@653 -- # waitforlisten 67403 /var/tmp/spdk2.sock 00:06:25.762 06:31:21 -- common/autotest_common.sh@829 -- # '[' -z 67403 ']' 00:06:25.762 06:31:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.762 06:31:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.762 06:31:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
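The NOT waitforlisten 67403 call being set up above expects the second target to fail, since pid 67387 already holds the core 0 lock. As the valid_exec_arg trace lines show, autotest_common.sh's NOT validates its argument and then inverts the exit status; a simplified sketch of that inversion (the real helper also tracks the error code, as the es=1 handling later in the trace shows):

    NOT() {                 # simplified sketch: succeed only if the wrapped command fails
        if "$@"; then
            return 1        # command unexpectedly succeeded
        fi
        return 0            # command failed, which is what the caller wanted
    }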
00:06:25.762 06:31:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.762 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:25.762 [2024-12-05 06:31:21.188466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.762 [2024-12-05 06:31:21.188590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67403 ] 00:06:26.022 [2024-12-05 06:31:21.327061] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67387 has claimed it. 00:06:26.022 [2024-12-05 06:31:21.327142] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.591 ERROR: process (pid: 67403) is no longer running 00:06:26.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67403) - No such process 00:06:26.591 06:31:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.591 06:31:21 -- common/autotest_common.sh@862 -- # return 1 00:06:26.591 06:31:21 -- common/autotest_common.sh@653 -- # es=1 00:06:26.591 06:31:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.591 06:31:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.591 06:31:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.591 06:31:21 -- event/cpu_locks.sh@122 -- # locks_exist 67387 00:06:26.591 06:31:21 -- event/cpu_locks.sh@22 -- # lslocks -p 67387 00:06:26.591 06:31:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.850 06:31:22 -- event/cpu_locks.sh@124 -- # killprocess 67387 00:06:26.850 06:31:22 -- common/autotest_common.sh@936 -- # '[' -z 67387 ']' 00:06:26.850 06:31:22 -- common/autotest_common.sh@940 -- # kill -0 67387 00:06:26.850 06:31:22 -- common/autotest_common.sh@941 -- # uname 00:06:26.850 06:31:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.850 06:31:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67387 00:06:27.109 06:31:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.109 killing process with pid 67387 00:06:27.109 06:31:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.109 06:31:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67387' 00:06:27.109 06:31:22 -- common/autotest_common.sh@955 -- # kill 67387 00:06:27.109 06:31:22 -- common/autotest_common.sh@960 -- # wait 67387 00:06:27.109 00:06:27.109 real 0m2.492s 00:06:27.109 user 0m3.020s 00:06:27.109 sys 0m0.550s 00:06:27.109 06:31:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.109 06:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 ************************************ 00:06:27.109 END TEST locking_app_on_locked_coremask 00:06:27.109 ************************************ 00:06:27.369 06:31:22 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.369 06:31:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.369 06:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.369 06:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.369 ************************************ 00:06:27.369 START TEST locking_overlapped_coremask 00:06:27.369 ************************************ 00:06:27.369 06:31:22 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:27.369 06:31:22 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67449 00:06:27.369 06:31:22 -- event/cpu_locks.sh@133 -- # waitforlisten 67449 /var/tmp/spdk.sock 00:06:27.369 06:31:22 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.369 06:31:22 -- common/autotest_common.sh@829 -- # '[' -z 67449 ']' 00:06:27.369 06:31:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.369 06:31:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.369 06:31:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.369 06:31:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.369 06:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.369 [2024-12-05 06:31:22.646767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.369 [2024-12-05 06:31:22.646878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67449 ] 00:06:27.369 [2024-12-05 06:31:22.783386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.369 [2024-12-05 06:31:22.815527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.369 [2024-12-05 06:31:22.815827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.369 [2024-12-05 06:31:22.816001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.369 [2024-12-05 06:31:22.816006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.307 06:31:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.307 06:31:23 -- common/autotest_common.sh@862 -- # return 0 00:06:28.307 06:31:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67467 00:06:28.307 06:31:23 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.307 06:31:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67467 /var/tmp/spdk2.sock 00:06:28.307 06:31:23 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.307 06:31:23 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67467 /var/tmp/spdk2.sock 00:06:28.307 06:31:23 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.307 06:31:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.307 06:31:23 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.307 06:31:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.307 06:31:23 -- common/autotest_common.sh@653 -- # waitforlisten 67467 /var/tmp/spdk2.sock 00:06:28.307 06:31:23 -- common/autotest_common.sh@829 -- # '[' -z 67467 ']' 00:06:28.307 06:31:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.307 06:31:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.307 06:31:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:28.307 06:31:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.307 06:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.307 [2024-12-05 06:31:23.676300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.307 [2024-12-05 06:31:23.676431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67467 ] 00:06:28.566 [2024-12-05 06:31:23.819388] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67449 has claimed it. 00:06:28.566 [2024-12-05 06:31:23.819452] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.132 ERROR: process (pid: 67467) is no longer running 00:06:29.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67467) - No such process 00:06:29.132 06:31:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.132 06:31:24 -- common/autotest_common.sh@862 -- # return 1 00:06:29.132 06:31:24 -- common/autotest_common.sh@653 -- # es=1 00:06:29.132 06:31:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.132 06:31:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.132 06:31:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.132 06:31:24 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.132 06:31:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.132 06:31:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.132 06:31:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.132 06:31:24 -- event/cpu_locks.sh@141 -- # killprocess 67449 00:06:29.132 06:31:24 -- common/autotest_common.sh@936 -- # '[' -z 67449 ']' 00:06:29.132 06:31:24 -- common/autotest_common.sh@940 -- # kill -0 67449 00:06:29.132 06:31:24 -- common/autotest_common.sh@941 -- # uname 00:06:29.132 06:31:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.132 06:31:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67449 00:06:29.132 06:31:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.132 killing process with pid 67449 00:06:29.132 06:31:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.132 06:31:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67449' 00:06:29.132 06:31:24 -- common/autotest_common.sh@955 -- # kill 67449 00:06:29.132 06:31:24 -- common/autotest_common.sh@960 -- # wait 67449 00:06:29.391 00:06:29.391 real 0m2.066s 00:06:29.391 user 0m6.075s 00:06:29.391 sys 0m0.303s 00:06:29.391 ************************************ 00:06:29.391 END TEST locking_overlapped_coremask 00:06:29.391 06:31:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.391 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.391 ************************************ 00:06:29.391 06:31:24 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.391 06:31:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.391 06:31:24 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.391 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.391 ************************************ 00:06:29.391 START TEST locking_overlapped_coremask_via_rpc 00:06:29.391 ************************************ 00:06:29.391 06:31:24 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:29.391 06:31:24 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67507 00:06:29.391 06:31:24 -- event/cpu_locks.sh@149 -- # waitforlisten 67507 /var/tmp/spdk.sock 00:06:29.391 06:31:24 -- common/autotest_common.sh@829 -- # '[' -z 67507 ']' 00:06:29.391 06:31:24 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.391 06:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.391 06:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.391 06:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.391 06:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.391 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.391 [2024-12-05 06:31:24.756653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:29.391 [2024-12-05 06:31:24.756756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67507 ] 00:06:29.649 [2024-12-05 06:31:24.888555] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.649 [2024-12-05 06:31:24.888604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.649 [2024-12-05 06:31:24.922051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.649 [2024-12-05 06:31:24.922378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.649 [2024-12-05 06:31:24.922677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.649 [2024-12-05 06:31:24.922683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.584 06:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.584 06:31:25 -- common/autotest_common.sh@862 -- # return 0 00:06:30.584 06:31:25 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.584 06:31:25 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67525 00:06:30.584 06:31:25 -- event/cpu_locks.sh@153 -- # waitforlisten 67525 /var/tmp/spdk2.sock 00:06:30.584 06:31:25 -- common/autotest_common.sh@829 -- # '[' -z 67525 ']' 00:06:30.584 06:31:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.584 06:31:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.584 06:31:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.584 06:31:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.584 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:30.584 [2024-12-05 06:31:25.774754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.584 [2024-12-05 06:31:25.774830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67525 ] 00:06:30.584 [2024-12-05 06:31:25.910257] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.584 [2024-12-05 06:31:25.914362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.584 [2024-12-05 06:31:25.978353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.584 [2024-12-05 06:31:25.979411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.584 [2024-12-05 06:31:25.982469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.584 [2024-12-05 06:31:25.982470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.521 06:31:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.521 06:31:26 -- common/autotest_common.sh@862 -- # return 0 00:06:31.521 06:31:26 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.521 06:31:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.521 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.521 06:31:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.521 06:31:26 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.521 06:31:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:31.521 06:31:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.521 06:31:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:31.521 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.521 06:31:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:31.521 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.521 06:31:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.521 06:31:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.521 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.521 [2024-12-05 06:31:26.819511] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67507 has claimed it. 00:06:31.521 request: 00:06:31.521 { 00:06:31.521 "method": "framework_enable_cpumask_locks", 00:06:31.521 "req_id": 1 00:06:31.521 } 00:06:31.521 Got JSON-RPC error response 00:06:31.521 response: 00:06:31.521 { 00:06:31.521 "code": -32603, 00:06:31.521 "message": "Failed to claim CPU core: 2" 00:06:31.521 } 00:06:31.521 06:31:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:31.521 06:31:26 -- common/autotest_common.sh@653 -- # es=1 00:06:31.521 06:31:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.521 06:31:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
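The JSON-RPC exchange above shows framework_enable_cpumask_locks failing with code -32603 because pid 67507 already holds core 2. rpc_cmd is the test harness wrapper around scripts/rpc.py, so the equivalent direct call (socket path taken from the trace) would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks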
00:06:31.521 06:31:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.521 06:31:26 -- event/cpu_locks.sh@158 -- # waitforlisten 67507 /var/tmp/spdk.sock 00:06:31.521 06:31:26 -- common/autotest_common.sh@829 -- # '[' -z 67507 ']' 00:06:31.521 06:31:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.521 06:31:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.521 06:31:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.521 06:31:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.521 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.780 06:31:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.780 06:31:27 -- common/autotest_common.sh@862 -- # return 0 00:06:31.780 06:31:27 -- event/cpu_locks.sh@159 -- # waitforlisten 67525 /var/tmp/spdk2.sock 00:06:31.780 06:31:27 -- common/autotest_common.sh@829 -- # '[' -z 67525 ']' 00:06:31.780 06:31:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.780 06:31:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.780 06:31:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.780 06:31:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.780 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.039 ************************************ 00:06:32.039 END TEST locking_overlapped_coremask_via_rpc 00:06:32.040 ************************************ 00:06:32.040 06:31:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.040 06:31:27 -- common/autotest_common.sh@862 -- # return 0 00:06:32.040 06:31:27 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.040 06:31:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.040 06:31:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.040 06:31:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.040 00:06:32.040 real 0m2.631s 00:06:32.040 user 0m1.364s 00:06:32.040 sys 0m0.196s 00:06:32.040 06:31:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.040 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.040 06:31:27 -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.040 06:31:27 -- event/cpu_locks.sh@15 -- # [[ -z 67507 ]] 00:06:32.040 06:31:27 -- event/cpu_locks.sh@15 -- # killprocess 67507 00:06:32.040 06:31:27 -- common/autotest_common.sh@936 -- # '[' -z 67507 ']' 00:06:32.040 06:31:27 -- common/autotest_common.sh@940 -- # kill -0 67507 00:06:32.040 06:31:27 -- common/autotest_common.sh@941 -- # uname 00:06:32.040 06:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.040 06:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67507 00:06:32.040 killing process with pid 67507 00:06:32.040 06:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.040 06:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.040 06:31:27 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 67507' 00:06:32.040 06:31:27 -- common/autotest_common.sh@955 -- # kill 67507 00:06:32.040 06:31:27 -- common/autotest_common.sh@960 -- # wait 67507 00:06:32.300 06:31:27 -- event/cpu_locks.sh@16 -- # [[ -z 67525 ]] 00:06:32.300 06:31:27 -- event/cpu_locks.sh@16 -- # killprocess 67525 00:06:32.300 06:31:27 -- common/autotest_common.sh@936 -- # '[' -z 67525 ']' 00:06:32.300 06:31:27 -- common/autotest_common.sh@940 -- # kill -0 67525 00:06:32.300 06:31:27 -- common/autotest_common.sh@941 -- # uname 00:06:32.300 06:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.300 06:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67525 00:06:32.300 killing process with pid 67525 00:06:32.300 06:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:32.300 06:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:32.300 06:31:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67525' 00:06:32.300 06:31:27 -- common/autotest_common.sh@955 -- # kill 67525 00:06:32.300 06:31:27 -- common/autotest_common.sh@960 -- # wait 67525 00:06:32.559 06:31:27 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.559 06:31:27 -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.559 06:31:27 -- event/cpu_locks.sh@15 -- # [[ -z 67507 ]] 00:06:32.559 06:31:27 -- event/cpu_locks.sh@15 -- # killprocess 67507 00:06:32.559 06:31:27 -- common/autotest_common.sh@936 -- # '[' -z 67507 ']' 00:06:32.559 06:31:27 -- common/autotest_common.sh@940 -- # kill -0 67507 00:06:32.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67507) - No such process 00:06:32.559 Process with pid 67507 is not found 00:06:32.559 06:31:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67507 is not found' 00:06:32.559 06:31:27 -- event/cpu_locks.sh@16 -- # [[ -z 67525 ]] 00:06:32.559 06:31:27 -- event/cpu_locks.sh@16 -- # killprocess 67525 00:06:32.559 06:31:27 -- common/autotest_common.sh@936 -- # '[' -z 67525 ']' 00:06:32.559 06:31:27 -- common/autotest_common.sh@940 -- # kill -0 67525 00:06:32.559 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67525) - No such process 00:06:32.559 Process with pid 67525 is not found 00:06:32.559 06:31:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67525 is not found' 00:06:32.559 06:31:27 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.559 ************************************ 00:06:32.559 END TEST cpu_locks 00:06:32.559 ************************************ 00:06:32.559 00:06:32.559 real 0m18.487s 00:06:32.559 user 0m34.597s 00:06:32.559 sys 0m4.185s 00:06:32.559 06:31:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.559 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 00:06:32.559 real 0m42.739s 00:06:32.559 user 1m25.053s 00:06:32.559 sys 0m7.371s 00:06:32.559 06:31:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.559 06:31:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 ************************************ 00:06:32.559 END TEST event 00:06:32.559 ************************************ 00:06:32.559 06:31:27 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.559 06:31:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.559 06:31:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.559 06:31:27 -- common/autotest_common.sh@10 -- # set +x 
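check_remaining_locks, used by the two overlapped-coremask tests above, compares the glob of live lock files against the expected set for cores 0-2; the /var/tmp/spdk_cpu_lock_NNN naming scheme is visible in the trace. An illustrative standalone version of the comparison (simplified from the harness's escaped pattern match):

    locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # one file per claimed core 0-2
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "all three core locks held"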
00:06:32.559 ************************************ 00:06:32.559 START TEST thread 00:06:32.559 ************************************ 00:06:32.559 06:31:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.818 * Looking for test storage... 00:06:32.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:32.818 06:31:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.818 06:31:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.818 06:31:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.818 06:31:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.818 06:31:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.818 06:31:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.818 06:31:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.818 06:31:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.818 06:31:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.818 06:31:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.818 06:31:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.818 06:31:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.818 06:31:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.818 06:31:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.818 06:31:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.818 06:31:28 -- scripts/common.sh@344 -- # : 1 00:06:32.818 06:31:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.818 06:31:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.818 06:31:28 -- scripts/common.sh@364 -- # decimal 1 00:06:32.818 06:31:28 -- scripts/common.sh@352 -- # local d=1 00:06:32.818 06:31:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.818 06:31:28 -- scripts/common.sh@354 -- # echo 1 00:06:32.818 06:31:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.818 06:31:28 -- scripts/common.sh@365 -- # decimal 2 00:06:32.818 06:31:28 -- scripts/common.sh@352 -- # local d=2 00:06:32.818 06:31:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.818 06:31:28 -- scripts/common.sh@354 -- # echo 2 00:06:32.818 06:31:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.818 06:31:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.818 06:31:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.818 06:31:28 -- scripts/common.sh@367 -- # return 0 00:06:32.818 06:31:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.818 --rc genhtml_branch_coverage=1 00:06:32.818 --rc genhtml_function_coverage=1 00:06:32.818 --rc genhtml_legend=1 00:06:32.818 --rc geninfo_all_blocks=1 00:06:32.818 --rc geninfo_unexecuted_blocks=1 00:06:32.818 00:06:32.818 ' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.818 --rc genhtml_branch_coverage=1 00:06:32.818 --rc genhtml_function_coverage=1 00:06:32.818 --rc genhtml_legend=1 00:06:32.818 --rc geninfo_all_blocks=1 00:06:32.818 --rc geninfo_unexecuted_blocks=1 00:06:32.818 00:06:32.818 ' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:32.818 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:32.818 --rc genhtml_branch_coverage=1 00:06:32.818 --rc genhtml_function_coverage=1 00:06:32.818 --rc genhtml_legend=1 00:06:32.818 --rc geninfo_all_blocks=1 00:06:32.818 --rc geninfo_unexecuted_blocks=1 00:06:32.818 00:06:32.818 ' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.818 --rc genhtml_branch_coverage=1 00:06:32.818 --rc genhtml_function_coverage=1 00:06:32.818 --rc genhtml_legend=1 00:06:32.818 --rc geninfo_all_blocks=1 00:06:32.818 --rc geninfo_unexecuted_blocks=1 00:06:32.818 00:06:32.818 ' 00:06:32.818 06:31:28 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.818 06:31:28 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:32.818 06:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.818 06:31:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.818 ************************************ 00:06:32.818 START TEST thread_poller_perf 00:06:32.818 ************************************ 00:06:32.818 06:31:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.818 [2024-12-05 06:31:28.206502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.819 [2024-12-05 06:31:28.206760] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67649 ] 00:06:33.078 [2024-12-05 06:31:28.344724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.078 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:33.078 [2024-12-05 06:31:28.376730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.012 [2024-12-05T06:31:29.478Z] ====================================== 00:06:34.012 [2024-12-05T06:31:29.478Z] busy:2209677826 (cyc) 00:06:34.012 [2024-12-05T06:31:29.478Z] total_run_count: 348000 00:06:34.012 [2024-12-05T06:31:29.478Z] tsc_hz: 2200000000 (cyc) 00:06:34.012 [2024-12-05T06:31:29.478Z] ====================================== 00:06:34.012 [2024-12-05T06:31:29.478Z] poller_cost: 6349 (cyc), 2885 (nsec) 00:06:34.012 ************************************ 00:06:34.012 END TEST thread_poller_perf 00:06:34.012 ************************************ 00:06:34.012 00:06:34.012 real 0m1.265s 00:06:34.012 user 0m1.116s 00:06:34.012 sys 0m0.041s 00:06:34.012 06:31:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.012 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.271 06:31:29 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.271 06:31:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:34.271 06:31:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.271 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.271 ************************************ 00:06:34.271 START TEST thread_poller_perf 00:06:34.271 ************************************ 00:06:34.271 06:31:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.271 [2024-12-05 06:31:29.521891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.271 [2024-12-05 06:31:29.521983] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67690 ] 00:06:34.271 [2024-12-05 06:31:29.658557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.271 [2024-12-05 06:31:29.687380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.271 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:35.649 [2024-12-05T06:31:31.115Z] ====================================== 00:06:35.649 [2024-12-05T06:31:31.115Z] busy:2202244170 (cyc) 00:06:35.649 [2024-12-05T06:31:31.115Z] total_run_count: 4898000 00:06:35.649 [2024-12-05T06:31:31.115Z] tsc_hz: 2200000000 (cyc) 00:06:35.649 [2024-12-05T06:31:31.115Z] ====================================== 00:06:35.649 [2024-12-05T06:31:31.115Z] poller_cost: 449 (cyc), 204 (nsec) 00:06:35.649 00:06:35.649 real 0m1.231s 00:06:35.649 user 0m1.091s 00:06:35.649 sys 0m0.034s 00:06:35.649 ************************************ 00:06:35.649 END TEST thread_poller_perf 00:06:35.649 ************************************ 00:06:35.649 06:31:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.649 06:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.649 06:31:30 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.649 ************************************ 00:06:35.649 END TEST thread 00:06:35.649 ************************************ 00:06:35.649 00:06:35.649 real 0m2.781s 00:06:35.649 user 0m2.349s 00:06:35.649 sys 0m0.215s 00:06:35.649 06:31:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.649 06:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.649 06:31:30 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:35.649 06:31:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.649 06:31:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.649 06:31:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.649 ************************************ 00:06:35.649 START TEST accel 00:06:35.649 ************************************ 00:06:35.649 06:31:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:35.649 * Looking for test storage... 00:06:35.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:35.649 06:31:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.649 06:31:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.649 06:31:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.649 06:31:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.649 06:31:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.649 06:31:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.649 06:31:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.649 06:31:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.649 06:31:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.649 06:31:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.649 06:31:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.649 06:31:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.649 06:31:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.649 06:31:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.649 06:31:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.649 06:31:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.649 06:31:30 -- scripts/common.sh@344 -- # : 1 00:06:35.649 06:31:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.649 06:31:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.649 06:31:30 -- scripts/common.sh@364 -- # decimal 1 00:06:35.649 06:31:30 -- scripts/common.sh@352 -- # local d=1 00:06:35.649 06:31:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.650 06:31:30 -- scripts/common.sh@354 -- # echo 1 00:06:35.650 06:31:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.650 06:31:31 -- scripts/common.sh@365 -- # decimal 2 00:06:35.650 06:31:31 -- scripts/common.sh@352 -- # local d=2 00:06:35.650 06:31:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.650 06:31:31 -- scripts/common.sh@354 -- # echo 2 00:06:35.650 06:31:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.650 06:31:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.650 06:31:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.650 06:31:31 -- scripts/common.sh@367 -- # return 0 00:06:35.650 06:31:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.650 06:31:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.650 --rc genhtml_branch_coverage=1 00:06:35.650 --rc genhtml_function_coverage=1 00:06:35.650 --rc genhtml_legend=1 00:06:35.650 --rc geninfo_all_blocks=1 00:06:35.650 --rc geninfo_unexecuted_blocks=1 00:06:35.650 00:06:35.650 ' 00:06:35.650 06:31:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.650 --rc genhtml_branch_coverage=1 00:06:35.650 --rc genhtml_function_coverage=1 00:06:35.650 --rc genhtml_legend=1 00:06:35.650 --rc geninfo_all_blocks=1 00:06:35.650 --rc geninfo_unexecuted_blocks=1 00:06:35.650 00:06:35.650 ' 00:06:35.650 06:31:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.650 --rc genhtml_branch_coverage=1 00:06:35.650 --rc genhtml_function_coverage=1 00:06:35.650 --rc genhtml_legend=1 00:06:35.650 --rc geninfo_all_blocks=1 00:06:35.650 --rc geninfo_unexecuted_blocks=1 00:06:35.650 00:06:35.650 ' 00:06:35.650 06:31:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.650 --rc genhtml_branch_coverage=1 00:06:35.650 --rc genhtml_function_coverage=1 00:06:35.650 --rc genhtml_legend=1 00:06:35.650 --rc geninfo_all_blocks=1 00:06:35.650 --rc geninfo_unexecuted_blocks=1 00:06:35.650 00:06:35.650 ' 00:06:35.650 06:31:31 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:35.650 06:31:31 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:35.650 06:31:31 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.650 06:31:31 -- accel/accel.sh@59 -- # spdk_tgt_pid=67766 00:06:35.650 06:31:31 -- accel/accel.sh@60 -- # waitforlisten 67766 00:06:35.650 06:31:31 -- common/autotest_common.sh@829 -- # '[' -z 67766 ']' 00:06:35.650 06:31:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.650 06:31:31 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:35.650 06:31:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.650 06:31:31 -- accel/accel.sh@58 -- # build_accel_config 00:06:35.650 06:31:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.650 06:31:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.650 06:31:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.650 06:31:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.650 06:31:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.650 06:31:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.650 06:31:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.650 06:31:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.650 06:31:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.650 06:31:31 -- accel/accel.sh@42 -- # jq -r . 00:06:35.650 [2024-12-05 06:31:31.068012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.650 [2024-12-05 06:31:31.068334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67766 ] 00:06:35.909 [2024-12-05 06:31:31.203366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.909 [2024-12-05 06:31:31.233936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.909 [2024-12-05 06:31:31.234093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.846 06:31:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.846 06:31:32 -- common/autotest_common.sh@862 -- # return 0 00:06:36.846 06:31:32 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:36.846 06:31:32 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:36.846 06:31:32 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:36.846 06:31:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.846 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.846 06:31:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 
06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # IFS== 00:06:36.846 06:31:32 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.846 06:31:32 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.846 06:31:32 -- accel/accel.sh@67 -- # killprocess 67766 00:06:36.846 06:31:32 -- common/autotest_common.sh@936 -- # '[' -z 67766 ']' 00:06:36.846 06:31:32 -- common/autotest_common.sh@940 -- # kill -0 67766 00:06:36.846 06:31:32 -- common/autotest_common.sh@941 -- # uname 00:06:36.846 06:31:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.846 06:31:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67766 00:06:36.846 06:31:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.846 06:31:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.846 killing process with pid 67766 00:06:36.846 06:31:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67766' 00:06:36.846 06:31:32 -- common/autotest_common.sh@955 -- # kill 67766 00:06:36.846 06:31:32 -- common/autotest_common.sh@960 -- # wait 67766 00:06:37.106 06:31:32 -- accel/accel.sh@68 -- # trap - ERR 00:06:37.106 06:31:32 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:37.106 06:31:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.106 06:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.106 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.106 06:31:32 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:37.106 06:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.106 06:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.106 06:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.106 06:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.106 06:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.106 06:31:32 -- accel/accel.sh@42 -- # jq -r . 
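The killprocess sequence traced above is the harness's standard teardown for the reactor process: probe that the PID still exists, resolve its command name, signal it, and reap it. A minimal sketch of that helper, reconstructed only from the commands visible in the trace (the real function in common/autotest_common.sh also branches on uname for FreeBSD ps flags, so details may differ):

  killprocess() {
      local pid=$1
      # the '[' -z 67766 ']' guard in the trace: refuse an empty argument
      [[ -n $pid ]] || return 1
      # kill -0 probes for existence without delivering a signal
      kill -0 "$pid" || return 1
      # resolve the command name; the trace resolved process_name=reactor_0
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [[ $process_name == sudo ]]; then
          # hypothetical branch: the trace only shows this comparison failing
          sudo kill "$pid"
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      # reap the process so later tests cannot race against its exit
      wait "$pid"
  }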
00:06:37.106 06:31:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.106 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.106 06:31:32 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.106 06:31:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:37.106 06:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.106 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.106 ************************************ 00:06:37.106 START TEST accel_missing_filename 00:06:37.106 ************************************ 00:06:37.106 06:31:32 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:37.106 06:31:32 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.106 06:31:32 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.106 06:31:32 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:37.106 06:31:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.106 06:31:32 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:37.106 06:31:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.106 06:31:32 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:37.106 06:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.106 06:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.106 06:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.106 06:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.106 06:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.106 06:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.106 06:31:32 -- accel/accel.sh@42 -- # jq -r . 00:06:37.106 [2024-12-05 06:31:32.428064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.106 [2024-12-05 06:31:32.428153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67812 ] 00:06:37.106 [2024-12-05 06:31:32.563024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.365 [2024-12-05 06:31:32.593491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.365 [2024-12-05 06:31:32.620393] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.365 [2024-12-05 06:31:32.656863] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.365 A filename is required. 
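Every negative test in this block goes through the same inversion helper: NOT runs the wrapped command, captures its exit status, and succeeds only when the command failed, which is why the missing-filename error above still counts as a pass. From the valid_exec_arg and type -t calls in the trace, the pattern is roughly this (a sketch, not the verbatim common/autotest_common.sh source):

  valid_exec_arg() {
      local arg=$1
      # only functions, shell builtins, and real executables may be wrapped
      case "$(type -t "$arg")" in
          function | builtin | file) return 0 ;;
          *) return 1 ;;
      esac
  }

  NOT() {
      local es=0
      valid_exec_arg "$@" || return 1
      "$@" || es=$?
      # invert: succeed only if the wrapped command exited non-zero
      (( es != 0 ))
  }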
00:06:37.365 06:31:32 -- common/autotest_common.sh@653 -- # es=234 00:06:37.365 06:31:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.365 06:31:32 -- common/autotest_common.sh@662 -- # es=106 00:06:37.365 06:31:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.365 06:31:32 -- common/autotest_common.sh@670 -- # es=1 00:06:37.365 06:31:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.365 00:06:37.365 real 0m0.316s 00:06:37.365 user 0m0.203s 00:06:37.365 sys 0m0.064s 00:06:37.365 06:31:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.365 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.365 ************************************ 00:06:37.365 END TEST accel_missing_filename 00:06:37.365 ************************************ 00:06:37.365 06:31:32 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.365 06:31:32 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:37.365 06:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.365 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.365 ************************************ 00:06:37.365 START TEST accel_compress_verify 00:06:37.365 ************************************ 00:06:37.365 06:31:32 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.365 06:31:32 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.366 06:31:32 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.366 06:31:32 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:37.366 06:31:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.366 06:31:32 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:37.366 06:31:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.366 06:31:32 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.366 06:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:37.366 06:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.366 06:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.366 06:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.366 06:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.366 06:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.366 06:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.366 06:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.366 06:31:32 -- accel/accel.sh@42 -- # jq -r . 00:06:37.366 [2024-12-05 06:31:32.796019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:37.366 [2024-12-05 06:31:32.796107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67842 ] 00:06:37.625 [2024-12-05 06:31:32.926140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.625 [2024-12-05 06:31:32.960642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.625 [2024-12-05 06:31:32.988139] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.625 [2024-12-05 06:31:33.024401] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.625 00:06:37.625 Compression does not support the verify option, aborting. 00:06:37.885 ************************************ 00:06:37.885 END TEST accel_compress_verify 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- common/autotest_common.sh@653 -- # es=161 00:06:37.885 06:31:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.885 06:31:33 -- common/autotest_common.sh@662 -- # es=33 00:06:37.885 06:31:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.885 06:31:33 -- common/autotest_common.sh@670 -- # es=1 00:06:37.885 06:31:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.885 00:06:37.885 real 0m0.315s 00:06:37.885 user 0m0.194s 00:06:37.885 sys 0m0.067s 00:06:37.885 06:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 06:31:33 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:37.885 06:31:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:37.885 06:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 ************************************ 00:06:37.885 START TEST accel_wrong_workload 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:37.885 06:31:33 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.885 06:31:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:37.885 06:31:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.885 06:31:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:37.885 06:31:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:37.885 06:31:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.885 06:31:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.885 06:31:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.885 06:31:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.885 06:31:33 -- accel/accel.sh@42 -- # jq -r . 
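Both expected failures above funnel through the same exit-status normalization before NOT inverts them: a raw status above 128 conventionally means the process died to a signal (128 plus the signal number), so the trace shows 234 reduced to 106 for the missing-filename run and 161 reduced to 33 for compress-verify, after which the case statement collapses any remaining non-zero status to 1. In sketch form, assuming the traced variable names reflect this logic:

  es=$?                  # raw status from the wrapped accel_perf run
  if (( es > 128 )); then
      es=$((es - 128))   # strip the 128 offset that encodes death-by-signal
  fi
  case "$es" in
      0) ;;              # genuine success would make NOT fail the test
      *) es=1 ;;         # collapse every failure mode to a single code
  esac
  (( !es == 0 ))         # true exactly when es is non-zero, so NOT passes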
00:06:37.885 Unsupported workload type: foobar 00:06:37.885 [2024-12-05 06:31:33.166250] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:37.885 accel_perf options: 00:06:37.885 [-h help message] 00:06:37.885 [-q queue depth per core] 00:06:37.885 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.885 [-T number of threads per core 00:06:37.885 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.885 [-t time in seconds] 00:06:37.885 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.885 [ dif_verify, , dif_generate, dif_generate_copy 00:06:37.885 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.885 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.885 [-S for crc32c workload, use this seed value (default 0) 00:06:37.885 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.885 [-f for fill workload, use this BYTE value (default 255) 00:06:37.885 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.885 [-y verify result if this switch is on] 00:06:37.885 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.885 Can be used to spread operations across a wider range of memory. 00:06:37.885 06:31:33 -- common/autotest_common.sh@653 -- # es=1 00:06:37.885 06:31:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.885 06:31:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.885 06:31:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.885 00:06:37.885 real 0m0.030s 00:06:37.885 user 0m0.017s 00:06:37.885 sys 0m0.013s 00:06:37.885 06:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 ************************************ 00:06:37.885 END TEST accel_wrong_workload 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.885 06:31:33 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:37.885 06:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 ************************************ 00:06:37.885 START TEST accel_negative_buffers 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.885 06:31:33 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.885 06:31:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:37.885 06:31:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:37.885 06:31:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.885 06:31:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:37.885 06:31:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:37.885 06:31:33 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:37.885 06:31:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.885 06:31:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.885 06:31:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.885 06:31:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.885 06:31:33 -- accel/accel.sh@42 -- # jq -r . 00:06:37.885 -x option must be non-negative. 00:06:37.885 [2024-12-05 06:31:33.245302] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:37.885 accel_perf options: 00:06:37.885 [-h help message] 00:06:37.885 [-q queue depth per core] 00:06:37.885 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.885 [-T number of threads per core 00:06:37.885 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.885 [-t time in seconds] 00:06:37.885 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.885 [ dif_verify, , dif_generate, dif_generate_copy 00:06:37.885 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.885 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.885 [-S for crc32c workload, use this seed value (default 0) 00:06:37.885 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.885 [-f for fill workload, use this BYTE value (default 255) 00:06:37.885 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.885 [-y verify result if this switch is on] 00:06:37.885 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.885 Can be used to spread operations across a wider range of memory. 
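The usage text above, printed verbatim on both rejected invocations, is the reference for every flag the remaining tests exercise. For orientation, the passing tests below drive the example binary directly, equivalent to running something like this by hand (the path reflects this autotest VM's layout; -c /dev/fd/62 is the JSON accel configuration that build_accel_config pipes in):

  # -t: seconds to run, -w: workload type, -S: CRC-32C seed, -y: verify results
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w crc32c -S 32 -y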
00:06:37.885 06:31:33 -- common/autotest_common.sh@653 -- # es=1 00:06:37.885 06:31:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.885 06:31:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.885 06:31:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.885 00:06:37.885 real 0m0.030s 00:06:37.885 user 0m0.019s 00:06:37.885 sys 0m0.011s 00:06:37.885 06:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 ************************************ 00:06:37.885 END TEST accel_negative_buffers 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:37.885 06:31:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:37.885 06:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.885 06:31:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.885 ************************************ 00:06:37.885 START TEST accel_crc32c 00:06:37.885 ************************************ 00:06:37.885 06:31:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:37.885 06:31:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.886 06:31:33 -- accel/accel.sh@17 -- # local accel_module 00:06:37.886 06:31:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.886 06:31:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:37.886 06:31:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.886 06:31:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.886 06:31:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.886 06:31:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.886 06:31:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.886 06:31:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.886 06:31:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.886 06:31:33 -- accel/accel.sh@42 -- # jq -r . 00:06:37.886 [2024-12-05 06:31:33.323667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:37.886 [2024-12-05 06:31:33.323751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67895 ] 00:06:38.145 [2024-12-05 06:31:33.460325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.145 [2024-12-05 06:31:33.495299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.517 06:31:34 -- accel/accel.sh@18 -- # out=' 00:06:39.517 SPDK Configuration: 00:06:39.517 Core mask: 0x1 00:06:39.517 00:06:39.517 Accel Perf Configuration: 00:06:39.517 Workload Type: crc32c 00:06:39.517 CRC-32C seed: 32 00:06:39.517 Transfer size: 4096 bytes 00:06:39.517 Vector count 1 00:06:39.517 Module: software 00:06:39.517 Queue depth: 32 00:06:39.517 Allocate depth: 32 00:06:39.517 # threads/core: 1 00:06:39.517 Run time: 1 seconds 00:06:39.517 Verify: Yes 00:06:39.517 00:06:39.517 Running for 1 seconds... 
00:06:39.517 00:06:39.517 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.517 ------------------------------------------------------------------------------------ 00:06:39.517 0,0 531904/s 2077 MiB/s 0 0 00:06:39.517 ==================================================================================== 00:06:39.517 Total 531904/s 2077 MiB/s 0 0' 00:06:39.517 06:31:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.517 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.517 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.517 06:31:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.517 06:31:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.517 06:31:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.517 06:31:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.517 06:31:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.517 06:31:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.517 06:31:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.517 06:31:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.517 06:31:34 -- accel/accel.sh@42 -- # jq -r . 00:06:39.517 [2024-12-05 06:31:34.634106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:39.517 [2024-12-05 06:31:34.634209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67920 ] 00:06:39.517 [2024-12-05 06:31:34.760783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.517 [2024-12-05 06:31:34.790777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.517 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.517 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.517 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.517 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.517 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.517 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.517 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=0x1 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=crc32c 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=32 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=software 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=32 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=32 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=1 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val=Yes 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.518 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:39.518 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.518 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- 
accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@21 -- # val= 00:06:40.452 06:31:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.452 06:31:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.452 06:31:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.452 06:31:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:40.452 06:31:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.452 00:06:40.452 real 0m2.607s 00:06:40.452 user 0m2.275s 00:06:40.452 sys 0m0.138s 00:06:40.452 06:31:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.452 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 ************************************ 00:06:40.452 END TEST accel_crc32c 00:06:40.452 ************************************ 00:06:40.711 06:31:35 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:40.711 06:31:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.711 06:31:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.711 06:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:40.711 ************************************ 00:06:40.711 START TEST accel_crc32c_C2 00:06:40.711 ************************************ 00:06:40.711 06:31:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:40.711 06:31:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.711 06:31:35 -- accel/accel.sh@17 -- # local accel_module 00:06:40.711 06:31:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.711 06:31:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.711 06:31:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.711 06:31:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.711 06:31:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.711 06:31:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.711 06:31:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.711 06:31:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.711 06:31:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.711 06:31:35 -- accel/accel.sh@42 -- # jq -r . 00:06:40.711 [2024-12-05 06:31:35.982581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.711 [2024-12-05 06:31:35.982695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67949 ] 00:06:40.711 [2024-12-05 06:31:36.117950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.711 [2024-12-05 06:31:36.147108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.086 06:31:37 -- accel/accel.sh@18 -- # out=' 00:06:42.086 SPDK Configuration: 00:06:42.086 Core mask: 0x1 00:06:42.086 00:06:42.086 Accel Perf Configuration: 00:06:42.086 Workload Type: crc32c 00:06:42.086 CRC-32C seed: 0 00:06:42.086 Transfer size: 4096 bytes 00:06:42.086 Vector count 2 00:06:42.086 Module: software 00:06:42.086 Queue depth: 32 00:06:42.086 Allocate depth: 32 00:06:42.086 # threads/core: 1 00:06:42.086 Run time: 1 seconds 00:06:42.086 Verify: Yes 00:06:42.086 00:06:42.086 Running for 1 seconds... 
00:06:42.086 00:06:42.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.086 ------------------------------------------------------------------------------------ 00:06:42.086 0,0 404416/s 1579 MiB/s 0 0 00:06:42.086 ==================================================================================== 00:06:42.086 Total 404416/s 1579 MiB/s 0 0' 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:42.086 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:42.086 06:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.086 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.086 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.086 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.086 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.086 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.086 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.086 06:31:37 -- accel/accel.sh@42 -- # jq -r . 00:06:42.086 [2024-12-05 06:31:37.290849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:42.086 [2024-12-05 06:31:37.290952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67965 ] 00:06:42.086 [2024-12-05 06:31:37.424703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.086 [2024-12-05 06:31:37.453667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=0x1 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=crc32c 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=0 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=software 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=32 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=32 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=1 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val=Yes 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.086 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:42.086 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.086 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.465 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.465 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.465 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.466 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.466 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.466 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.466 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- 
accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@21 -- # val= 00:06:43.466 06:31:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.466 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.466 06:31:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.466 06:31:38 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:43.466 06:31:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.466 00:06:43.466 real 0m2.617s 00:06:43.466 user 0m2.272s 00:06:43.466 sys 0m0.147s 00:06:43.466 06:31:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.466 06:31:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 ************************************ 00:06:43.466 END TEST accel_crc32c_C2 00:06:43.466 ************************************ 00:06:43.466 06:31:38 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:43.466 06:31:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.466 06:31:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.466 06:31:38 -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 ************************************ 00:06:43.466 START TEST accel_copy 00:06:43.466 ************************************ 00:06:43.466 06:31:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:43.466 06:31:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.466 06:31:38 -- accel/accel.sh@17 -- # local accel_module 00:06:43.466 06:31:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:43.466 06:31:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.466 06:31:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.466 06:31:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.466 06:31:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.466 06:31:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.466 06:31:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.466 06:31:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.466 06:31:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.466 06:31:38 -- accel/accel.sh@42 -- # jq -r . 00:06:43.466 [2024-12-05 06:31:38.645708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.466 [2024-12-05 06:31:38.645795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68000 ] 00:06:43.466 [2024-12-05 06:31:38.778845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.466 [2024-12-05 06:31:38.807911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.845 06:31:39 -- accel/accel.sh@18 -- # out=' 00:06:44.845 SPDK Configuration: 00:06:44.845 Core mask: 0x1 00:06:44.845 00:06:44.845 Accel Perf Configuration: 00:06:44.845 Workload Type: copy 00:06:44.845 Transfer size: 4096 bytes 00:06:44.845 Vector count 1 00:06:44.845 Module: software 00:06:44.845 Queue depth: 32 00:06:44.845 Allocate depth: 32 00:06:44.845 # threads/core: 1 00:06:44.845 Run time: 1 seconds 00:06:44.845 Verify: Yes 00:06:44.845 00:06:44.845 Running for 1 seconds... 
00:06:44.845 00:06:44.845 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.845 ------------------------------------------------------------------------------------ 00:06:44.845 0,0 357504/s 1396 MiB/s 0 0 00:06:44.845 ==================================================================================== 00:06:44.845 Total 357504/s 1396 MiB/s 0 0' 00:06:44.845 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:44.845 06:31:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:44.845 06:31:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.845 06:31:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.845 06:31:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.845 06:31:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.845 06:31:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.845 06:31:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.845 06:31:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.845 06:31:39 -- accel/accel.sh@42 -- # jq -r . 00:06:44.845 [2024-12-05 06:31:39.950185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.845 [2024-12-05 06:31:39.950277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68019 ] 00:06:44.845 [2024-12-05 06:31:40.085583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.845 [2024-12-05 06:31:40.114308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val=0x1 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.845 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.845 06:31:40 -- accel/accel.sh@21 -- # val=copy 00:06:44.845 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.845 06:31:40 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- 
accel/accel.sh@21 -- # val= 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val=software 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val=32 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val=32 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val=1 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val=Yes 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:44.846 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:44.846 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:44.846 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@21 -- # val= 00:06:45.794 06:31:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.794 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.794 06:31:41 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.794 06:31:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.794 06:31:41 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:45.794 06:31:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.794 ************************************ 00:06:45.794 END TEST accel_copy 00:06:45.794 ************************************ 00:06:45.794 00:06:45.794 real 0m2.608s 00:06:45.794 user 0m2.280s 00:06:45.794 sys 0m0.128s 00:06:45.794 06:31:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.794 06:31:41 -- common/autotest_common.sh@10 -- # set +x 00:06:46.092 06:31:41 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.092 06:31:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:46.092 06:31:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.092 06:31:41 -- common/autotest_common.sh@10 -- # set +x 00:06:46.092 ************************************ 00:06:46.092 START TEST accel_fill 00:06:46.092 ************************************ 00:06:46.092 06:31:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.092 06:31:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.092 06:31:41 -- accel/accel.sh@17 -- # local accel_module 00:06:46.092 06:31:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.092 06:31:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.092 06:31:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.092 06:31:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.092 06:31:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.092 06:31:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.092 06:31:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.092 06:31:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.092 06:31:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.092 06:31:41 -- accel/accel.sh@42 -- # jq -r . 00:06:46.092 [2024-12-05 06:31:41.308584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:46.092 [2024-12-05 06:31:41.309031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68048 ] 00:06:46.092 [2024-12-05 06:31:41.442464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.092 [2024-12-05 06:31:41.472133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.510 06:31:42 -- accel/accel.sh@18 -- # out=' 00:06:47.510 SPDK Configuration: 00:06:47.510 Core mask: 0x1 00:06:47.510 00:06:47.510 Accel Perf Configuration: 00:06:47.510 Workload Type: fill 00:06:47.510 Fill pattern: 0x80 00:06:47.510 Transfer size: 4096 bytes 00:06:47.510 Vector count 1 00:06:47.510 Module: software 00:06:47.510 Queue depth: 64 00:06:47.510 Allocate depth: 64 00:06:47.510 # threads/core: 1 00:06:47.510 Run time: 1 seconds 00:06:47.510 Verify: Yes 00:06:47.510 00:06:47.510 Running for 1 seconds... 
00:06:47.510 00:06:47.510 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.510 ------------------------------------------------------------------------------------ 00:06:47.510 0,0 523008/s 2043 MiB/s 0 0 00:06:47.510 ==================================================================================== 00:06:47.510 Total 523008/s 2043 MiB/s 0 0' 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.510 06:31:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.510 06:31:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.510 06:31:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.510 06:31:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.510 06:31:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.510 06:31:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.510 06:31:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.510 06:31:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.510 06:31:42 -- accel/accel.sh@42 -- # jq -r . 00:06:47.510 [2024-12-05 06:31:42.613134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.510 [2024-12-05 06:31:42.613233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68062 ] 00:06:47.510 [2024-12-05 06:31:42.745697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.510 [2024-12-05 06:31:42.775222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val=0x1 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val=fill 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val=0x80 00:06:47.510 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.510 06:31:42 -- accel/accel.sh@20 -- # read -r var val 
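All of the IFS=: / read -r var val / case "$var" churn in these runs is accel.sh re-invoking accel_perf under xtrace and scanning the trace to confirm which opcode and module actually executed; the val=fill and val=0x80 assignments above correspond directly to the -w fill -f 128 flags. The parser is approximately this shape, with $accel_perf_bin standing in as a placeholder for the full example path (a sketch; the real accel.sh may bind the fields slightly differently):

  while IFS=: read -r var val; do
      case "$var" in
          *accel_opc*) accel_opc=$val ;;        # e.g. fill, crc32c, copy
          *accel_module*) accel_module=$val ;;  # e.g. software
      esac
  done < <("$accel_perf_bin" -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 2>&1)

  # afterwards the test asserts what the [[ -n ... ]] trace lines confirm:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]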
00:06:47.510 06:31:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val=software 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val=64 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val=64 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val=1 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val=Yes 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.511 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:47.511 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:47.511 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 
00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:48.449 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.449 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.449 06:31:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.449 06:31:43 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:48.449 06:31:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.449 00:06:48.449 real 0m2.603s 00:06:48.449 user 0m2.263s 00:06:48.449 sys 0m0.136s 00:06:48.449 ************************************ 00:06:48.449 END TEST accel_fill 00:06:48.449 ************************************ 00:06:48.449 06:31:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.449 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 06:31:43 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:48.708 06:31:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:48.708 06:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.708 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 ************************************ 00:06:48.708 START TEST accel_copy_crc32c 00:06:48.708 ************************************ 00:06:48.708 06:31:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:48.708 06:31:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.708 06:31:43 -- accel/accel.sh@17 -- # local accel_module 00:06:48.708 06:31:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.708 06:31:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.708 06:31:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.708 06:31:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.708 06:31:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.708 06:31:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.708 06:31:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.708 06:31:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.708 06:31:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.708 06:31:43 -- accel/accel.sh@42 -- # jq -r . 00:06:48.708 [2024-12-05 06:31:43.961737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.708 [2024-12-05 06:31:43.961805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68097 ] 00:06:48.708 [2024-12-05 06:31:44.095484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.708 [2024-12-05 06:31:44.124745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.085 06:31:45 -- accel/accel.sh@18 -- # out=' 00:06:50.085 SPDK Configuration: 00:06:50.085 Core mask: 0x1 00:06:50.085 00:06:50.085 Accel Perf Configuration: 00:06:50.085 Workload Type: copy_crc32c 00:06:50.085 CRC-32C seed: 0 00:06:50.085 Vector size: 4096 bytes 00:06:50.085 Transfer size: 4096 bytes 00:06:50.085 Vector count 1 00:06:50.085 Module: software 00:06:50.085 Queue depth: 32 00:06:50.085 Allocate depth: 32 00:06:50.085 # threads/core: 1 00:06:50.085 Run time: 1 seconds 00:06:50.085 Verify: Yes 00:06:50.085 00:06:50.085 Running for 1 seconds... 
00:06:50.085 00:06:50.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.085 ------------------------------------------------------------------------------------ 00:06:50.085 0,0 288640/s 1127 MiB/s 0 0 00:06:50.085 ==================================================================================== 00:06:50.085 Total 288640/s 1127 MiB/s 0 0' 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:50.085 06:31:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:50.085 06:31:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.085 06:31:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.085 06:31:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.085 06:31:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.085 06:31:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.085 06:31:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.085 06:31:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.085 06:31:45 -- accel/accel.sh@42 -- # jq -r . 00:06:50.085 [2024-12-05 06:31:45.263448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.085 [2024-12-05 06:31:45.264093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68116 ] 00:06:50.085 [2024-12-05 06:31:45.395048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.085 [2024-12-05 06:31:45.423866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=0x1 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=0 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 
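(Annotation on the copy_crc32c table above.) The Bandwidth column is simply transfers/s multiplied by the transfer size: 288640/s × 4096 bytes ≈ 1127 MiB/s, matching both rows; with core mask 0x1 only core 0 runs, so the per-core and Total rows are identical. For reference, a minimal standalone re-run of this workload might look like the sketch below. The -t/-w/-y flags are taken verbatim from the trace; the -q (queue depth) and -o (transfer size) flag names are assumptions based on accel_perf's usage text, not on anything in this log.
# sketch only: copy_crc32c at queue depth 32 on 4096-byte buffers, with verification
# -q/-o are assumed flag names; -t/-w/-y appear verbatim in the trace above
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -q 32 -o 4096 -t 1 -w copy_crc32c -y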
06:31:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=software 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=32 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=32 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=1 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.085 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.085 06:31:45 -- accel/accel.sh@21 -- # val=Yes 00:06:50.085 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.086 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.086 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:50.086 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:50.086 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:50.086 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 
00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@21 -- # val= 00:06:51.462 06:31:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.462 06:31:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.462 06:31:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.462 06:31:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:51.462 06:31:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.462 00:06:51.462 real 0m2.607s 00:06:51.462 user 0m2.271s 00:06:51.462 sys 0m0.134s 00:06:51.462 06:31:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.462 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:51.462 ************************************ 00:06:51.462 END TEST accel_copy_crc32c 00:06:51.462 ************************************ 00:06:51.462 06:31:46 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:51.462 06:31:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.462 06:31:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.462 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:51.462 ************************************ 00:06:51.462 START TEST accel_copy_crc32c_C2 00:06:51.462 ************************************ 00:06:51.462 06:31:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:51.462 06:31:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.462 06:31:46 -- accel/accel.sh@17 -- # local accel_module 00:06:51.462 06:31:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:51.462 06:31:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:51.462 06:31:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.462 06:31:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.462 06:31:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.462 06:31:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.462 06:31:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.462 06:31:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.462 06:31:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.462 06:31:46 -- accel/accel.sh@42 -- # jq -r . 00:06:51.462 [2024-12-05 06:31:46.617740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:51.462 [2024-12-05 06:31:46.617837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68145 ] 00:06:51.462 [2024-12-05 06:31:46.754064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.462 [2024-12-05 06:31:46.785520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.844 06:31:47 -- accel/accel.sh@18 -- # out=' 00:06:52.844 SPDK Configuration: 00:06:52.844 Core mask: 0x1 00:06:52.844 00:06:52.844 Accel Perf Configuration: 00:06:52.844 Workload Type: copy_crc32c 00:06:52.844 CRC-32C seed: 0 00:06:52.844 Vector size: 4096 bytes 00:06:52.844 Transfer size: 8192 bytes 00:06:52.844 Vector count 2 00:06:52.844 Module: software 00:06:52.844 Queue depth: 32 00:06:52.844 Allocate depth: 32 00:06:52.844 # threads/core: 1 00:06:52.844 Run time: 1 seconds 00:06:52.844 Verify: Yes 00:06:52.844 00:06:52.844 Running for 1 seconds... 00:06:52.844 00:06:52.844 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.844 ------------------------------------------------------------------------------------ 00:06:52.844 0,0 209632/s 1637 MiB/s 0 0 00:06:52.844 ==================================================================================== 00:06:52.844 Total 209632/s 1637 MiB/s 0 0' 00:06:52.844 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.844 06:31:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.844 06:31:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.844 06:31:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.844 06:31:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.844 06:31:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.844 06:31:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.844 06:31:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.844 06:31:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.844 06:31:47 -- accel/accel.sh@42 -- # jq -r . 00:06:52.844 [2024-12-05 06:31:47.922550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
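(Annotation on the copy_crc32c -C 2 table above.) A quick check on the figures: with -C 2 the transfer size doubles to 8192 bytes while the vector size stays 4096, so the expected bandwidth is 209632/s × 8192 bytes ≈ 1637 MiB/s; and since core mask 0x1 leaves a single active core, the per-core row and the Total row must carry the same numbers, as they do here.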
00:06:52.844 [2024-12-05 06:31:47.922823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68165 ] 00:06:52.844 [2024-12-05 06:31:48.056008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.844 [2024-12-05 06:31:48.084666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=0x1 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=0 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=software 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=32 00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=32 
00:06:52.844 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.844 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.844 06:31:48 -- accel/accel.sh@21 -- # val=1 00:06:52.845 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.845 06:31:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.845 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.845 06:31:48 -- accel/accel.sh@21 -- # val=Yes 00:06:52.845 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.845 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.845 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.845 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:52.845 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:52.845 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@21 -- # val= 00:06:53.783 06:31:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.783 06:31:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.783 06:31:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.783 06:31:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:53.783 06:31:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.783 00:06:53.783 real 0m2.604s 00:06:53.783 user 0m2.264s 00:06:53.783 sys 0m0.138s 00:06:53.783 ************************************ 00:06:53.783 END TEST accel_copy_crc32c_C2 00:06:53.783 ************************************ 00:06:53.783 06:31:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.783 06:31:49 -- common/autotest_common.sh@10 -- # set +x 00:06:53.783 06:31:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:53.783 06:31:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:53.783 06:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.783 06:31:49 -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 ************************************ 00:06:54.042 START TEST accel_dualcast 00:06:54.042 ************************************ 00:06:54.042 06:31:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:54.042 06:31:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.042 06:31:49 -- accel/accel.sh@17 -- # local accel_module 00:06:54.042 06:31:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:54.042 06:31:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:54.042 06:31:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.042 06:31:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.042 06:31:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.042 06:31:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.042 06:31:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.042 06:31:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.042 06:31:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.042 06:31:49 -- accel/accel.sh@42 -- # jq -r . 00:06:54.042 [2024-12-05 06:31:49.272982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.042 [2024-12-05 06:31:49.273071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68198 ] 00:06:54.043 [2024-12-05 06:31:49.407413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.043 [2024-12-05 06:31:49.436265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.422 06:31:50 -- accel/accel.sh@18 -- # out=' 00:06:55.422 SPDK Configuration: 00:06:55.422 Core mask: 0x1 00:06:55.422 00:06:55.422 Accel Perf Configuration: 00:06:55.422 Workload Type: dualcast 00:06:55.422 Transfer size: 4096 bytes 00:06:55.422 Vector count 1 00:06:55.422 Module: software 00:06:55.422 Queue depth: 32 00:06:55.422 Allocate depth: 32 00:06:55.422 # threads/core: 1 00:06:55.422 Run time: 1 seconds 00:06:55.422 Verify: Yes 00:06:55.422 00:06:55.422 Running for 1 seconds... 00:06:55.422 00:06:55.422 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.422 ------------------------------------------------------------------------------------ 00:06:55.422 0,0 398976/s 1558 MiB/s 0 0 00:06:55.422 ==================================================================================== 00:06:55.422 Total 398976/s 1558 MiB/s 0 0' 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:55.422 06:31:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.422 06:31:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.422 06:31:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.422 06:31:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.422 06:31:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.422 06:31:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.422 06:31:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.422 06:31:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.422 06:31:50 -- accel/accel.sh@42 -- # jq -r . 
00:06:55.422 [2024-12-05 06:31:50.574686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.422 [2024-12-05 06:31:50.574770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68213 ] 00:06:55.422 [2024-12-05 06:31:50.709677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.422 [2024-12-05 06:31:50.738528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=0x1 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=dualcast 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=software 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=32 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=32 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=1 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 
06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val=Yes 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.422 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:55.422 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.422 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:56.801 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.801 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.801 ************************************ 00:06:56.801 END TEST accel_dualcast 00:06:56.801 ************************************ 00:06:56.801 06:31:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.801 06:31:51 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:56.801 06:31:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.801 00:06:56.801 real 0m2.621s 00:06:56.801 user 0m2.277s 00:06:56.801 sys 0m0.140s 00:06:56.801 06:31:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.801 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:06:56.801 06:31:51 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:56.801 06:31:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.801 06:31:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.801 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:06:56.801 ************************************ 00:06:56.801 START TEST accel_compare 00:06:56.801 ************************************ 00:06:56.801 06:31:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:56.801 
06:31:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.801 06:31:51 -- accel/accel.sh@17 -- # local accel_module 00:06:56.801 06:31:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:56.801 06:31:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.801 06:31:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.801 06:31:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.801 06:31:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.801 06:31:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.801 06:31:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.801 06:31:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.801 06:31:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.801 06:31:51 -- accel/accel.sh@42 -- # jq -r . 00:06:56.801 [2024-12-05 06:31:51.947133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.801 [2024-12-05 06:31:51.947214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68250 ] 00:06:56.801 [2024-12-05 06:31:52.075444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.801 [2024-12-05 06:31:52.104365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.181 06:31:53 -- accel/accel.sh@18 -- # out=' 00:06:58.181 SPDK Configuration: 00:06:58.181 Core mask: 0x1 00:06:58.181 00:06:58.181 Accel Perf Configuration: 00:06:58.181 Workload Type: compare 00:06:58.181 Transfer size: 4096 bytes 00:06:58.181 Vector count 1 00:06:58.181 Module: software 00:06:58.181 Queue depth: 32 00:06:58.181 Allocate depth: 32 00:06:58.181 # threads/core: 1 00:06:58.181 Run time: 1 seconds 00:06:58.181 Verify: Yes 00:06:58.181 00:06:58.181 Running for 1 seconds... 00:06:58.181 00:06:58.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.181 ------------------------------------------------------------------------------------ 00:06:58.181 0,0 525248/s 2051 MiB/s 0 0 00:06:58.181 ==================================================================================== 00:06:58.181 Total 525248/s 2051 MiB/s 0 0' 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:58.181 06:31:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.181 06:31:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.181 06:31:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.181 06:31:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.181 06:31:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.181 06:31:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.181 06:31:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.181 06:31:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.181 06:31:53 -- accel/accel.sh@42 -- # jq -r . 00:06:58.181 [2024-12-05 06:31:53.244348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
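(Annotation on the compare table above.) compare posts the highest software-path throughput in this section, 525248/s and 2051 MiB/s, versus 1558 MiB/s for dualcast and 1127 to 1637 MiB/s for the copy_crc32c variants. That ordering is consistent with compare only reading its two 4096-byte buffers: there is no destination write and no CRC folding on this path, though the log itself does not break the cost down further.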
00:06:58.181 [2024-12-05 06:31:53.244440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68264 ] 00:06:58.181 [2024-12-05 06:31:53.372199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.181 [2024-12-05 06:31:53.401218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.181 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.181 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val=0x1 00:06:58.181 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.181 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.181 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.181 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.181 06:31:53 -- accel/accel.sh@21 -- # val=compare 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val=software 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val=32 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val=32 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val=1 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val=Yes 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:58.182 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:58.182 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:58.182 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 ************************************ 00:06:59.119 END TEST accel_compare 00:06:59.119 ************************************ 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:59.119 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.119 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.119 06:31:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.119 06:31:54 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:59.119 06:31:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.119 00:06:59.119 real 0m2.603s 00:06:59.119 user 0m2.274s 00:06:59.119 sys 0m0.129s 00:06:59.119 06:31:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.119 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 06:31:54 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:59.119 06:31:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:59.119 06:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.119 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 ************************************ 00:06:59.119 START TEST accel_xor 00:06:59.119 ************************************ 00:06:59.119 06:31:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:59.119 06:31:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.119 06:31:54 -- accel/accel.sh@17 -- # local accel_module 00:06:59.119 
06:31:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:59.378 06:31:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:59.378 06:31:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.378 06:31:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.378 06:31:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.378 06:31:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.378 06:31:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.378 06:31:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.378 06:31:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.378 06:31:54 -- accel/accel.sh@42 -- # jq -r . 00:06:59.378 [2024-12-05 06:31:54.602491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.378 [2024-12-05 06:31:54.602592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68300 ] 00:06:59.378 [2024-12-05 06:31:54.737799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.378 [2024-12-05 06:31:54.769198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.756 06:31:55 -- accel/accel.sh@18 -- # out=' 00:07:00.756 SPDK Configuration: 00:07:00.756 Core mask: 0x1 00:07:00.756 00:07:00.756 Accel Perf Configuration: 00:07:00.756 Workload Type: xor 00:07:00.756 Source buffers: 2 00:07:00.756 Transfer size: 4096 bytes 00:07:00.756 Vector count 1 00:07:00.756 Module: software 00:07:00.756 Queue depth: 32 00:07:00.756 Allocate depth: 32 00:07:00.756 # threads/core: 1 00:07:00.756 Run time: 1 seconds 00:07:00.756 Verify: Yes 00:07:00.756 00:07:00.756 Running for 1 seconds... 00:07:00.756 00:07:00.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.756 ------------------------------------------------------------------------------------ 00:07:00.756 0,0 271872/s 1062 MiB/s 0 0 00:07:00.756 ==================================================================================== 00:07:00.756 Total 271872/s 1062 MiB/s 0 0' 00:07:00.756 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:07:00.756 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:07:00.756 06:31:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:00.756 06:31:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:00.756 06:31:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.756 06:31:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.756 06:31:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.756 06:31:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.756 06:31:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.756 06:31:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.756 06:31:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.756 06:31:55 -- accel/accel.sh@42 -- # jq -r . 00:07:00.756 [2024-12-05 06:31:55.915476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
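(Annotation on the xor table above.) "Source buffers: 2" is the default for the xor workload here; the harness re-runs it just below with -x 3 to add a third source. Note that the Bandwidth column counts transfer-size bytes per completed operation, 271872/s × 4096 bytes ≈ 1062 MiB/s, not the total bytes read across the source buffers.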
00:07:00.756 [2024-12-05 06:31:55.915560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68314 ] 00:07:00.756 [2024-12-05 06:31:56.048576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.756 [2024-12-05 06:31:56.078192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.756 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.756 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.756 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.756 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.756 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.756 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.756 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=0x1 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=xor 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=2 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=software 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=32 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=32 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=1 00:07:00.757 06:31:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val=Yes 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:00.757 06:31:56 -- accel/accel.sh@21 -- # val= 00:07:00.757 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:07:00.757 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@21 -- # val= 00:07:02.137 06:31:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.137 06:31:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.137 06:31:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.137 06:31:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:02.137 06:31:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.137 00:07:02.137 real 0m2.618s 00:07:02.137 user 0m2.269s 00:07:02.137 sys 0m0.148s 00:07:02.137 06:31:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.137 ************************************ 00:07:02.137 END TEST accel_xor 00:07:02.137 ************************************ 00:07:02.137 06:31:57 -- common/autotest_common.sh@10 -- # set +x 00:07:02.137 06:31:57 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:02.137 06:31:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:02.137 06:31:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.137 06:31:57 -- common/autotest_common.sh@10 -- # set +x 00:07:02.137 ************************************ 00:07:02.137 START TEST accel_xor 00:07:02.137 ************************************ 00:07:02.137 
06:31:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:02.137 06:31:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.137 06:31:57 -- accel/accel.sh@17 -- # local accel_module 00:07:02.137 06:31:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:02.137 06:31:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:02.137 06:31:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.137 06:31:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.137 06:31:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.137 06:31:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.137 06:31:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.137 06:31:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.137 06:31:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.137 06:31:57 -- accel/accel.sh@42 -- # jq -r . 00:07:02.137 [2024-12-05 06:31:57.265444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:02.137 [2024-12-05 06:31:57.265509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68350 ] 00:07:02.137 [2024-12-05 06:31:57.391208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.137 [2024-12-05 06:31:57.420471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.518 06:31:58 -- accel/accel.sh@18 -- # out=' 00:07:03.518 SPDK Configuration: 00:07:03.518 Core mask: 0x1 00:07:03.518 00:07:03.518 Accel Perf Configuration: 00:07:03.518 Workload Type: xor 00:07:03.518 Source buffers: 3 00:07:03.518 Transfer size: 4096 bytes 00:07:03.518 Vector count 1 00:07:03.518 Module: software 00:07:03.518 Queue depth: 32 00:07:03.518 Allocate depth: 32 00:07:03.518 # threads/core: 1 00:07:03.518 Run time: 1 seconds 00:07:03.518 Verify: Yes 00:07:03.518 00:07:03.518 Running for 1 seconds... 00:07:03.518 00:07:03.518 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.518 ------------------------------------------------------------------------------------ 00:07:03.518 0,0 265632/s 1037 MiB/s 0 0 00:07:03.518 ==================================================================================== 00:07:03.518 Total 265632/s 1037 MiB/s 0 0' 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:03.518 06:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:03.518 06:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.518 06:31:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.518 06:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.518 06:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.518 06:31:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.518 06:31:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.518 06:31:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.518 06:31:58 -- accel/accel.sh@42 -- # jq -r . 00:07:03.518 [2024-12-05 06:31:58.566272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
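(Annotation on the xor -x 3 table above.) Adding a third source buffer costs little on the software path: 271872/s (1062 MiB/s) with two sources drops to 265632/s (1037 MiB/s) with three, roughly a 2.3% decline, plausibly because each output block now reads one extra 4096-byte source. A direct invocation is straightforward, since -t/-w/-y/-x all appear verbatim in the run_test line in this log (the JSON config plumbing used by the harness is omitted in this sketch):
# sketch only: three-source xor with verification, flags as seen in the harness command
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3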
00:07:03.518 [2024-12-05 06:31:58.566384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68364 ] 00:07:03.518 [2024-12-05 06:31:58.699095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.518 [2024-12-05 06:31:58.730003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=0x1 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=xor 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=3 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=software 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=32 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=32 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val=1 00:07:03.518 06:31:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.518 06:31:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.518 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.518 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.519 06:31:58 -- accel/accel.sh@21 -- # val=Yes 00:07:03.519 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.519 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.519 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:03.519 06:31:58 -- accel/accel.sh@21 -- # val= 00:07:03.519 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:07:03.519 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@21 -- # val= 00:07:04.456 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:07:04.456 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:07:04.456 06:31:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.456 06:31:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:04.457 06:31:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.457 00:07:04.457 real 0m2.596s 00:07:04.457 user 0m2.271s 00:07:04.457 sys 0m0.126s 00:07:04.457 06:31:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.457 ************************************ 00:07:04.457 END TEST accel_xor 00:07:04.457 ************************************ 00:07:04.457 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:07:04.457 06:31:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:04.457 06:31:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:04.457 06:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.457 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:07:04.457 ************************************ 00:07:04.457 START TEST accel_dif_verify 00:07:04.457 ************************************ 
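accel_dif_verify, which begins here, exercises the DIF (Data Integrity Field) verify opcode: as the configuration echoed below shows, each 4096-byte transfer is treated as 512-byte blocks carrying 8 bytes of protection metadata apiece, and the workload checks that metadata. A sketch of the standalone equivalent, again with the harness's JSON-config plumbing omitted:

  # sketch: run the DIF-verify benchmark by hand (path taken from the trace below)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify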
00:07:04.457 06:31:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:04.457 06:31:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.457 06:31:59 -- accel/accel.sh@17 -- # local accel_module 00:07:04.457 06:31:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:04.457 06:31:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:04.457 06:31:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.457 06:31:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.457 06:31:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.457 06:31:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.457 06:31:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.457 06:31:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.457 06:31:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.457 06:31:59 -- accel/accel.sh@42 -- # jq -r . 00:07:04.457 [2024-12-05 06:31:59.919849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:04.457 [2024-12-05 06:31:59.919936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68398 ] 00:07:04.716 [2024-12-05 06:32:00.056320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.716 [2024-12-05 06:32:00.091505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.095 06:32:01 -- accel/accel.sh@18 -- # out=' 00:07:06.095 SPDK Configuration: 00:07:06.095 Core mask: 0x1 00:07:06.095 00:07:06.095 Accel Perf Configuration: 00:07:06.095 Workload Type: dif_verify 00:07:06.095 Vector size: 4096 bytes 00:07:06.095 Transfer size: 4096 bytes 00:07:06.095 Block size: 512 bytes 00:07:06.095 Metadata size: 8 bytes 00:07:06.095 Vector count 1 00:07:06.095 Module: software 00:07:06.095 Queue depth: 32 00:07:06.095 Allocate depth: 32 00:07:06.095 # threads/core: 1 00:07:06.095 Run time: 1 seconds 00:07:06.095 Verify: No 00:07:06.095 00:07:06.095 Running for 1 seconds... 00:07:06.095 00:07:06.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.095 ------------------------------------------------------------------------------------ 00:07:06.095 0,0 113216/s 442 MiB/s 0 0 00:07:06.095 ==================================================================================== 00:07:06.095 Total 113216/s 442 MiB/s 0 0' 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:06.095 06:32:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:06.095 06:32:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.095 06:32:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.095 06:32:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.095 06:32:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.095 06:32:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.095 06:32:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.095 06:32:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.095 06:32:01 -- accel/accel.sh@42 -- # jq -r . 00:07:06.095 [2024-12-05 06:32:01.235583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:06.095 [2024-12-05 06:32:01.235674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68418 ] 00:07:06.095 [2024-12-05 06:32:01.368606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.095 [2024-12-05 06:32:01.400446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=0x1 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=dif_verify 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=software 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 
-- # val=32 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=32 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=1 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val=No 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.095 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.095 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.095 06:32:01 -- accel/accel.sh@21 -- # val= 00:07:06.096 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.096 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.096 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@21 -- # val= 00:07:07.473 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:07:07.473 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:07:07.473 06:32:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.473 06:32:02 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:07.473 06:32:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.473 00:07:07.473 real 0m2.625s 00:07:07.473 user 0m2.285s 00:07:07.473 sys 0m0.142s 00:07:07.473 06:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.473 ************************************ 00:07:07.473 END TEST accel_dif_verify 00:07:07.473 ************************************ 00:07:07.473 
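A quick way to sanity-check the result tables in this section: bandwidth is just transfers per second times the 4096-byte transfer size. For the dif_verify result above this gives 113216 * 4096 / 2^20 = 442 MiB/s, in agreement with the Total row. A one-liner for the conversion, assuming bc is available on the build host:

  # transfers/s at 4096 bytes each -> MiB/s
  echo 'scale=1; 113216 * 4096 / 1048576' | bc    # prints 442.2

The accel_dif_generate test that follows is the producer-side counterpart of dif_verify: rather than checking existing DIF metadata, it generates the 8-byte protection field for each 512-byte block.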
06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:07.473 06:32:02 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:07.473 06:32:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:07.473 06:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.473 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:07:07.473 ************************************ 00:07:07.473 START TEST accel_dif_generate 00:07:07.473 ************************************ 00:07:07.473 06:32:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:07.473 06:32:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.473 06:32:02 -- accel/accel.sh@17 -- # local accel_module 00:07:07.473 06:32:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:07.473 06:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:07.473 06:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.473 06:32:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.473 06:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.473 06:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.473 06:32:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.473 06:32:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.473 06:32:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.473 06:32:02 -- accel/accel.sh@42 -- # jq -r . 00:07:07.473 [2024-12-05 06:32:02.598035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.473 [2024-12-05 06:32:02.598125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68447 ] 00:07:07.473 [2024-12-05 06:32:02.725219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.473 [2024-12-05 06:32:02.755533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.409 06:32:03 -- accel/accel.sh@18 -- # out=' 00:07:08.409 SPDK Configuration: 00:07:08.409 Core mask: 0x1 00:07:08.409 00:07:08.409 Accel Perf Configuration: 00:07:08.409 Workload Type: dif_generate 00:07:08.409 Vector size: 4096 bytes 00:07:08.409 Transfer size: 4096 bytes 00:07:08.409 Block size: 512 bytes 00:07:08.409 Metadata size: 8 bytes 00:07:08.409 Vector count 1 00:07:08.409 Module: software 00:07:08.409 Queue depth: 32 00:07:08.409 Allocate depth: 32 00:07:08.409 # threads/core: 1 00:07:08.409 Run time: 1 seconds 00:07:08.409 Verify: No 00:07:08.409 00:07:08.409 Running for 1 seconds... 
00:07:08.409 00:07:08.409 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.409 ------------------------------------------------------------------------------------ 00:07:08.409 0,0 141216/s 551 MiB/s 0 0 00:07:08.409 ==================================================================================== 00:07:08.409 Total 141216/s 551 MiB/s 0 0' 00:07:08.409 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.409 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:08.669 06:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:08.669 06:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.669 06:32:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.669 06:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.669 06:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.669 06:32:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.669 06:32:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.669 06:32:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.669 06:32:03 -- accel/accel.sh@42 -- # jq -r . 00:07:08.669 [2024-12-05 06:32:03.892016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.669 [2024-12-05 06:32:03.892123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68461 ] 00:07:08.669 [2024-12-05 06:32:04.030663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.669 [2024-12-05 06:32:04.060742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=0x1 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=dif_generate 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val
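The repetitive case/IFS/read records that dominate this log are accel.sh echoing, under bash's set -x tracing, the loop that re-reads the captured accel_perf output (the out=' block above) one var:val pair at a time; most lines carry no colon-delimited value, hence the long runs of empty val= entries. A plausible reconstruction of that loop, for orientation only; the real code lives in spdk/test/accel/accel.sh and its patterns differ:

  # sketch of the parsing loop whose xtrace output fills this log
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=$val ;;    # hypothetical pattern; the trace shows accel_opc=dif_generate
          *'Module'*)        accel_module=$val ;; # hypothetical pattern; the trace shows accel_module=software
      esac
  done <<< "$out"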
00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=software 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=32 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=32 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=1 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val=No 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:08.669 06:32:04 -- accel/accel.sh@21 -- # val= 00:07:08.669 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:07:08.669 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- 
accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 06:32:05 -- accel/accel.sh@21 -- # val= 00:07:10.047 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.047 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.047 ************************************ 00:07:10.047 END TEST accel_dif_generate 00:07:10.047 ************************************ 00:07:10.047 06:32:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.047 06:32:05 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:10.047 06:32:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.047 00:07:10.047 real 0m2.617s 00:07:10.047 user 0m2.274s 00:07:10.047 sys 0m0.142s 00:07:10.047 06:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.047 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:07:10.047 06:32:05 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:10.047 06:32:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:10.047 06:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.047 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:07:10.047 ************************************ 00:07:10.047 START TEST accel_dif_generate_copy 00:07:10.047 ************************************ 00:07:10.047 06:32:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:10.047 06:32:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.047 06:32:05 -- accel/accel.sh@17 -- # local accel_module 00:07:10.047 06:32:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:10.047 06:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:10.047 06:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.047 06:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.047 06:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.047 06:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.048 06:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.048 06:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.048 06:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.048 06:32:05 -- accel/accel.sh@42 -- # jq -r . 00:07:10.048 [2024-12-05 06:32:05.270173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:10.048 [2024-12-05 06:32:05.270265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68501 ] 00:07:10.048 [2024-12-05 06:32:05.406392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.048 [2024-12-05 06:32:05.436837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.424 06:32:06 -- accel/accel.sh@18 -- # out=' 00:07:11.424 SPDK Configuration: 00:07:11.424 Core mask: 0x1 00:07:11.424 00:07:11.424 Accel Perf Configuration: 00:07:11.424 Workload Type: dif_generate_copy 00:07:11.424 Vector size: 4096 bytes 00:07:11.424 Transfer size: 4096 bytes 00:07:11.424 Vector count 1 00:07:11.424 Module: software 00:07:11.424 Queue depth: 32 00:07:11.424 Allocate depth: 32 00:07:11.424 # threads/core: 1 00:07:11.424 Run time: 1 seconds 00:07:11.424 Verify: No 00:07:11.424 00:07:11.424 Running for 1 seconds... 00:07:11.424 00:07:11.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.424 ------------------------------------------------------------------------------------ 00:07:11.424 0,0 105760/s 413 MiB/s 0 0 00:07:11.424 ==================================================================================== 00:07:11.424 Total 105760/s 413 MiB/s 0 0' 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:11.424 06:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.424 06:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.424 06:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.424 06:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.424 06:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.424 06:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.424 06:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.424 06:32:06 -- accel/accel.sh@42 -- # jq -r . 00:07:11.424 [2024-12-05 06:32:06.580523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:11.424 [2024-12-05 06:32:06.580638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68515 ] 00:07:11.424 [2024-12-05 06:32:06.714177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.424 [2024-12-05 06:32:06.748482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val=0x1 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val=software 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val=32 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.424 06:32:06 -- accel/accel.sh@21 -- # val=32 00:07:11.424 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.424 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.425 06:32:06 -- accel/accel.sh@21 
-- # val=1 00:07:11.425 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.425 06:32:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.425 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.425 06:32:06 -- accel/accel.sh@21 -- # val=No 00:07:11.425 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.425 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.425 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:11.425 06:32:06 -- accel/accel.sh@21 -- # val= 00:07:11.425 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:07:11.425 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@21 -- # val= 00:07:12.800 06:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.800 06:32:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.800 06:32:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.800 06:32:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:12.800 06:32:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.800 00:07:12.800 real 0m2.628s 00:07:12.800 user 0m2.291s 00:07:12.800 sys 0m0.136s 00:07:12.800 06:32:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.800 06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.800 ************************************ 00:07:12.800 END TEST accel_dif_generate_copy 00:07:12.800 ************************************ 00:07:12.800 06:32:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:12.800 06:32:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.800 06:32:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:12.800 06:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.800 06:32:07 -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.800 ************************************ 00:07:12.800 START TEST accel_comp 00:07:12.800 ************************************ 00:07:12.800 06:32:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.800 06:32:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.800 06:32:07 -- accel/accel.sh@17 -- # local accel_module 00:07:12.800 06:32:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.800 06:32:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.800 06:32:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.800 06:32:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.800 06:32:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.800 06:32:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.800 06:32:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.800 06:32:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.800 06:32:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.800 06:32:07 -- accel/accel.sh@42 -- # jq -r . 00:07:12.800 [2024-12-05 06:32:07.954794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.800 [2024-12-05 06:32:07.954901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68544 ] 00:07:12.800 [2024-12-05 06:32:08.091207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.800 [2024-12-05 06:32:08.123089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.853 06:32:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.853 00:07:13.853 SPDK Configuration: 00:07:13.853 Core mask: 0x1 00:07:13.853 00:07:13.854 Accel Perf Configuration: 00:07:13.854 Workload Type: compress 00:07:13.854 Transfer size: 4096 bytes 00:07:13.854 Vector count 1 00:07:13.854 Module: software 00:07:13.854 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.854 Queue depth: 32 00:07:13.854 Allocate depth: 32 00:07:13.854 # threads/core: 1 00:07:13.854 Run time: 1 seconds 00:07:13.854 Verify: No 00:07:13.854 00:07:13.854 Running for 1 seconds... 
00:07:13.854 00:07:13.854 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.854 ------------------------------------------------------------------------------------ 00:07:13.854 0,0 55264/s 215 MiB/s 0 0 00:07:13.854 ==================================================================================== 00:07:13.854 Total 55264/s 215 MiB/s 0 0' 00:07:13.854 06:32:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.854 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:13.854 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:13.854 06:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.854 06:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.854 06:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.854 06:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.854 06:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.854 06:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.854 06:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.854 06:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.854 06:32:09 -- accel/accel.sh@42 -- # jq -r . 00:07:13.854 [2024-12-05 06:32:09.275108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.854 [2024-12-05 06:32:09.275199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68569 ] 00:07:14.139 [2024-12-05 06:32:09.402187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.139 [2024-12-05 06:32:09.431450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=0x1 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=compress 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=:
00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=software 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=32 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=32 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=1 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.139 06:32:09 -- accel/accel.sh@21 -- # val=No 00:07:14.139 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.139 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.140 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.140 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.140 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.140 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.140 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:14.140 06:32:09 -- accel/accel.sh@21 -- # val= 00:07:14.140 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.140 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:07:14.140 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 
00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:15.519 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:15.519 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:15.519 06:32:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.519 06:32:10 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:15.519 06:32:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.519 00:07:15.519 real 0m2.623s 00:07:15.519 user 0m2.286s 00:07:15.519 sys 0m0.136s 00:07:15.519 06:32:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.519 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:15.519 ************************************ 00:07:15.519 END TEST accel_comp 00:07:15.519 ************************************ 00:07:15.519 06:32:10 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.519 06:32:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:15.519 06:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.519 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:15.519 ************************************ 00:07:15.519 START TEST accel_decomp 00:07:15.519 ************************************ 00:07:15.519 06:32:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.519 06:32:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.519 06:32:10 -- accel/accel.sh@17 -- # local accel_module 00:07:15.519 06:32:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.519 06:32:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.519 06:32:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.519 06:32:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.519 06:32:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.519 06:32:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.519 06:32:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.519 06:32:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.519 06:32:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.519 06:32:10 -- accel/accel.sh@42 -- # jq -r . 00:07:15.519 [2024-12-05 06:32:10.629566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.520 [2024-12-05 06:32:10.629650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68598 ] 00:07:15.520 [2024-12-05 06:32:10.756510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.520 [2024-12-05 06:32:10.787513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.458 06:32:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:16.458 00:07:16.458 SPDK Configuration: 00:07:16.458 Core mask: 0x1 00:07:16.458 00:07:16.458 Accel Perf Configuration: 00:07:16.458 Workload Type: decompress 00:07:16.458 Transfer size: 4096 bytes 00:07:16.458 Vector count 1 00:07:16.458 Module: software 00:07:16.459 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.459 Queue depth: 32 00:07:16.459 Allocate depth: 32 00:07:16.459 # threads/core: 1 00:07:16.459 Run time: 1 seconds 00:07:16.459 Verify: Yes 00:07:16.459 00:07:16.459 Running for 1 seconds... 00:07:16.459 00:07:16.459 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.459 ------------------------------------------------------------------------------------ 00:07:16.459 0,0 79104/s 309 MiB/s 0 0 00:07:16.459 ==================================================================================== 00:07:16.459 Total 79104/s 309 MiB/s 0 0' 00:07:16.459 06:32:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.459 06:32:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.459 06:32:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.459 06:32:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.459 06:32:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.459 06:32:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.459 06:32:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.459 06:32:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.459 06:32:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.459 06:32:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.459 06:32:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.459 06:32:11 -- accel/accel.sh@42 -- # jq -r . 00:07:16.719 [2024-12-05 06:32:11.926765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:16.719 [2024-12-05 06:32:11.926852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68612 ] 00:07:16.719 [2024-12-05 06:32:12.053530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.719 [2024-12-05 06:32:12.082549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=0x1 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=decompress 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=software 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=32 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- 
accel/accel.sh@21 -- # val=32 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=1 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val=Yes 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:16.719 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:16.719 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:16.719 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:18.099 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:18.099 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:18.099 06:32:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.099 06:32:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.099 06:32:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.099 00:07:18.099 real 0m2.597s 00:07:18.099 user 0m2.260s 00:07:18.099 sys 0m0.139s 00:07:18.099 06:32:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.099 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.099 ************************************ 00:07:18.099 END TEST accel_decomp 00:07:18.099 ************************************ 00:07:18.099 06:32:13 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
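Before the next case starts, it is worth noting how each of these tests is driven: the xtrace shows accel.sh invoking build/examples/accel_perf with an accel JSON config read from fd 62 (built from the empty accel_json_cfg=(), since no hardware modules are enabled in this run). Below is a minimal sketch of an equivalent manual invocation for the software decompress case that just finished; the flags are copied from the trace, while the fd-62 plumbing and the empty {} config are assumptions about what accel.sh sets up.

```bash
#!/usr/bin/env bash
# Hedged sketch of re-running the plain software decompress case by hand.
# Flags copied from the xtrace above; '{}' stands in for the empty
# accel_json_cfg=() (no DSA/IAA accel modules configured in this run).
spdk=/home/vagrant/spdk_repo/spdk
args=(
  -c /dev/fd/62                # accel module config, read from fd 62
  -t 1                         # run for 1 second
  -w decompress                # workload type
  -l "$spdk/test/accel/bib"    # compressed input file
  -y                           # verify output ("Verify: Yes" in the summaries)
)
"$spdk/build/examples/accel_perf" "${args[@]}" 62< <(echo '{}')
```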
00:07:18.099 06:32:13 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:18.100 06:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.100 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.100 ************************************ 00:07:18.100 START TEST accel_decmop_full 00:07:18.100 ************************************ 00:07:18.100 06:32:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.100 06:32:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.100 06:32:13 -- accel/accel.sh@17 -- # local accel_module 00:07:18.100 06:32:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.100 06:32:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.100 06:32:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.100 06:32:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.100 06:32:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.100 06:32:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.100 06:32:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.100 06:32:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.100 06:32:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.100 06:32:13 -- accel/accel.sh@42 -- # jq -r . 00:07:18.100 [2024-12-05 06:32:13.287803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.100 [2024-12-05 06:32:13.287891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68642 ] 00:07:18.100 [2024-12-05 06:32:13.424172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.100 [2024-12-05 06:32:13.455702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.481 06:32:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.481 00:07:19.481 SPDK Configuration: 00:07:19.481 Core mask: 0x1 00:07:19.481 00:07:19.481 Accel Perf Configuration: 00:07:19.481 Workload Type: decompress 00:07:19.481 Transfer size: 111250 bytes 00:07:19.481 Vector count 1 00:07:19.481 Module: software 00:07:19.481 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.481 Queue depth: 32 00:07:19.481 Allocate depth: 32 00:07:19.481 # threads/core: 1 00:07:19.481 Run time: 1 seconds 00:07:19.481 Verify: Yes 00:07:19.481 00:07:19.481 Running for 1 seconds... 
00:07:19.481 00:07:19.481 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.481 ------------------------------------------------------------------------------------ 00:07:19.481 0,0 5312/s 219 MiB/s 0 0 00:07:19.481 ==================================================================================== 00:07:19.481 Total 5312/s 563 MiB/s 0 0' 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:19.481 06:32:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:19.481 06:32:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.481 06:32:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.481 06:32:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.481 06:32:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.481 06:32:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.481 06:32:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.481 06:32:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.481 06:32:14 -- accel/accel.sh@42 -- # jq -r . 00:07:19.481 [2024-12-05 06:32:14.593158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.481 [2024-12-05 06:32:14.593225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68666 ] 00:07:19.481 [2024-12-05 06:32:14.720118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.481 [2024-12-05 06:32:14.752401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=0x1 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=decompress 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.481 06:32:14 -- accel/accel.sh@20 
-- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=software 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=32 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=32 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=1 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val=Yes 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.481 06:32:14 -- accel/accel.sh@21 -- # val= 00:07:19.481 06:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.481 06:32:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # 
val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:20.418 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.418 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.418 06:32:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.418 06:32:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.418 06:32:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.418 ************************************ 00:07:20.418 END TEST accel_decmop_full 00:07:20.418 ************************************ 00:07:20.418 00:07:20.418 real 0m2.616s 00:07:20.418 user 0m2.282s 00:07:20.418 sys 0m0.135s 00:07:20.418 06:32:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.418 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:07:20.678 06:32:15 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.678 06:32:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:20.678 06:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.678 06:32:15 -- common/autotest_common.sh@10 -- # set +x 00:07:20.678 ************************************ 00:07:20.678 START TEST accel_decomp_mcore 00:07:20.678 ************************************ 00:07:20.678 06:32:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.678 06:32:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.678 06:32:15 -- accel/accel.sh@17 -- # local accel_module 00:07:20.678 06:32:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.678 06:32:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:20.678 06:32:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.678 06:32:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.678 06:32:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.678 06:32:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.678 06:32:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.678 06:32:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.678 06:32:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.678 06:32:15 -- accel/accel.sh@42 -- # jq -r . 00:07:20.678 [2024-12-05 06:32:15.955468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
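The mcore variant starting here differs from the previous runs only in the -m 0xf core mask visible on the accel_perf command line above, which reappears as '-c 0xf' in the EAL parameters on the next line. Each set bit enables one core, so 0xf selects cores 0 through 3; a small illustrative decode (not part of the suite):

```bash
# Decode the 0xf core mask: one bit per core, LSB = core 0.
mask=0xf
for core in {0..7}; do
  (( (mask >> core) & 1 )) && echo "core $core enabled"
done
# Prints cores 0..3, matching "Total cores available: 4" and the four
# "Reactor started on core N" notices that follow.
```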
00:07:20.678 [2024-12-05 06:32:15.955729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68695 ] 00:07:20.678 [2024-12-05 06:32:16.088591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.678 [2024-12-05 06:32:16.120506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.678 [2024-12-05 06:32:16.120639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.678 [2024-12-05 06:32:16.120755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.678 [2024-12-05 06:32:16.120755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.057 06:32:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:22.057 00:07:22.057 SPDK Configuration: 00:07:22.057 Core mask: 0xf 00:07:22.057 00:07:22.057 Accel Perf Configuration: 00:07:22.057 Workload Type: decompress 00:07:22.057 Transfer size: 4096 bytes 00:07:22.057 Vector count 1 00:07:22.057 Module: software 00:07:22.057 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.057 Queue depth: 32 00:07:22.057 Allocate depth: 32 00:07:22.057 # threads/core: 1 00:07:22.057 Run time: 1 seconds 00:07:22.057 Verify: Yes 00:07:22.057 00:07:22.057 Running for 1 seconds... 00:07:22.057 00:07:22.057 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.057 ------------------------------------------------------------------------------------ 00:07:22.057 0,0 64256/s 118 MiB/s 0 0 00:07:22.057 3,0 61056/s 112 MiB/s 0 0 00:07:22.057 2,0 61184/s 112 MiB/s 0 0 00:07:22.057 1,0 61888/s 114 MiB/s 0 0 00:07:22.057 ==================================================================================== 00:07:22.057 Total 248384/s 970 MiB/s 0 0' 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:22.057 06:32:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:22.057 06:32:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.057 06:32:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.057 06:32:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.057 06:32:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.057 06:32:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.057 06:32:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.057 06:32:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.057 06:32:17 -- accel/accel.sh@42 -- # jq -r . 00:07:22.057 [2024-12-05 06:32:17.274476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
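The four-core table above also obeys a simple invariant that makes these summaries easy to eyeball: the Total row is the sum of the per-core rows, and total bandwidth is again rate times transfer size. Reader-side arithmetic with the figures copied from the log:

```bash
# Per-core transfer rates from the 0xf run above sum to the Total row.
echo $(( 64256 + 61056 + 61184 + 61888 ))        # prints 248384, the Total transfers/s
echo "$(( 248384 * 4096 / 1024 / 1024 )) MiB/s"  # prints "970 MiB/s", as reported
```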
00:07:22.057 [2024-12-05 06:32:17.274580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68712 ] 00:07:22.057 [2024-12-05 06:32:17.408274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.057 [2024-12-05 06:32:17.439718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.057 [2024-12-05 06:32:17.439855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.057 [2024-12-05 06:32:17.439949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.057 [2024-12-05 06:32:17.440255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=0xf 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=decompress 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=software 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 
00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=32 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=32 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=1 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val=Yes 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.057 06:32:17 -- accel/accel.sh@21 -- # val= 00:07:22.057 06:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.057 06:32:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.436 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.436 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.436 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.436 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.436 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.436 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.436 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.436 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.436 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- 
accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:23.437 ************************************ 00:07:23.437 END TEST accel_decomp_mcore 00:07:23.437 ************************************ 00:07:23.437 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.437 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.437 06:32:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.437 06:32:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.437 06:32:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.437 00:07:23.437 real 0m2.640s 00:07:23.437 user 0m8.687s 00:07:23.437 sys 0m0.168s 00:07:23.437 06:32:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.437 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.437 06:32:18 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.437 06:32:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:23.437 06:32:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.437 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.437 ************************************ 00:07:23.437 START TEST accel_decomp_full_mcore 00:07:23.437 ************************************ 00:07:23.437 06:32:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.437 06:32:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.437 06:32:18 -- accel/accel.sh@17 -- # local accel_module 00:07:23.437 06:32:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.437 06:32:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.437 06:32:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.437 06:32:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.437 06:32:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.437 06:32:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.437 06:32:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.437 06:32:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.437 06:32:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.437 06:32:18 -- accel/accel.sh@42 -- # jq -r . 00:07:23.437 [2024-12-05 06:32:18.649119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.437 [2024-12-05 06:32:18.649205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68750 ] 00:07:23.437 [2024-12-05 06:32:18.783878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.437 [2024-12-05 06:32:18.815682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.437 [2024-12-05 06:32:18.815816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.437 [2024-12-05 06:32:18.815909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.437 [2024-12-05 06:32:18.815911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.817 06:32:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:24.817 00:07:24.817 SPDK Configuration: 00:07:24.817 Core mask: 0xf 00:07:24.817 00:07:24.817 Accel Perf Configuration: 00:07:24.817 Workload Type: decompress 00:07:24.817 Transfer size: 111250 bytes 00:07:24.817 Vector count 1 00:07:24.817 Module: software 00:07:24.817 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.817 Queue depth: 32 00:07:24.817 Allocate depth: 32 00:07:24.817 # threads/core: 1 00:07:24.817 Run time: 1 seconds 00:07:24.817 Verify: Yes 00:07:24.817 00:07:24.817 Running for 1 seconds... 00:07:24.817 00:07:24.817 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.817 ------------------------------------------------------------------------------------ 00:07:24.817 0,0 4832/s 199 MiB/s 0 0 00:07:24.817 3,0 4800/s 198 MiB/s 0 0 00:07:24.817 2,0 4864/s 200 MiB/s 0 0 00:07:24.817 1,0 4864/s 200 MiB/s 0 0 00:07:24.817 ==================================================================================== 00:07:24.817 Total 19360/s 2054 MiB/s 0 0' 00:07:24.817 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.817 06:32:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.817 06:32:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.817 06:32:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.817 06:32:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.817 06:32:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.817 06:32:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.817 06:32:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.817 06:32:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.817 06:32:19 -- accel/accel.sh@42 -- # jq -r . 00:07:24.817 [2024-12-05 06:32:19.975002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
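The -o 0 "full" variants run the same decompress workload with 111250-byte transfers instead of 4096, and the effect shows in the summary above: far fewer transfers per second, but more than double the aggregate bandwidth of the 4 KiB mcore run (2054 MiB/s versus 970 MiB/s). The same arithmetic check holds at the larger size:

```bash
# Total bandwidth for the full-buffer (-o 0) run above.
echo "$(( 19360 * 111250 / 1024 / 1024 )) MiB/s"   # prints "2054 MiB/s"
```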
00:07:24.817 [2024-12-05 06:32:19.975088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68772 ] 00:07:24.817 [2024-12-05 06:32:20.104499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.817 [2024-12-05 06:32:20.136071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.817 [2024-12-05 06:32:20.136200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.817 [2024-12-05 06:32:20.136279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.817 [2024-12-05 06:32:20.136636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=0xf 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=decompress 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=software 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 
00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=32 00:07:24.817 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.817 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.817 06:32:20 -- accel/accel.sh@21 -- # val=32 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.818 06:32:20 -- accel/accel.sh@21 -- # val=1 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.818 06:32:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.818 06:32:20 -- accel/accel.sh@21 -- # val=Yes 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.818 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:24.818 06:32:20 -- accel/accel.sh@21 -- # val= 00:07:24.818 06:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:24.818 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.199 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.199 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.199 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.199 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.199 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.199 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.199 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.200 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.200 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.200 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.200 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.200 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.200 06:32:21 -- 
accel/accel.sh@20 -- # read -r var val 00:07:26.200 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:26.200 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.200 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.200 06:32:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.200 06:32:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.200 06:32:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.200 00:07:26.200 real 0m2.657s 00:07:26.200 user 0m8.778s 00:07:26.200 sys 0m0.157s 00:07:26.200 06:32:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.200 06:32:21 -- common/autotest_common.sh@10 -- # set +x 00:07:26.200 ************************************ 00:07:26.200 END TEST accel_decomp_full_mcore 00:07:26.200 ************************************ 00:07:26.200 06:32:21 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.200 06:32:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:26.200 06:32:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.200 06:32:21 -- common/autotest_common.sh@10 -- # set +x 00:07:26.200 ************************************ 00:07:26.200 START TEST accel_decomp_mthread 00:07:26.200 ************************************ 00:07:26.200 06:32:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.200 06:32:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.200 06:32:21 -- accel/accel.sh@17 -- # local accel_module 00:07:26.200 06:32:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.200 06:32:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:26.200 06:32:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.200 06:32:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.200 06:32:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.200 06:32:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.200 06:32:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.200 06:32:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.200 06:32:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.200 06:32:21 -- accel/accel.sh@42 -- # jq -r . 00:07:26.200 [2024-12-05 06:32:21.353196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.200 [2024-12-05 06:32:21.353276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:07:26.200 [2024-12-05 06:32:21.487412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.200 [2024-12-05 06:32:21.517241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.580 06:32:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:27.580 00:07:27.580 SPDK Configuration: 00:07:27.581 Core mask: 0x1 00:07:27.581 00:07:27.581 Accel Perf Configuration: 00:07:27.581 Workload Type: decompress 00:07:27.581 Transfer size: 4096 bytes 00:07:27.581 Vector count 1 00:07:27.581 Module: software 00:07:27.581 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.581 Queue depth: 32 00:07:27.581 Allocate depth: 32 00:07:27.581 # threads/core: 2 00:07:27.581 Run time: 1 seconds 00:07:27.581 Verify: Yes 00:07:27.581 00:07:27.581 Running for 1 seconds... 00:07:27.581 00:07:27.581 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.581 ------------------------------------------------------------------------------------ 00:07:27.581 0,1 40320/s 74 MiB/s 0 0 00:07:27.581 0,0 40224/s 74 MiB/s 0 0 00:07:27.581 ==================================================================================== 00:07:27.581 Total 80544/s 314 MiB/s 0 0' 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.581 06:32:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.581 06:32:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.581 06:32:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.581 06:32:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.581 06:32:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.581 06:32:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.581 06:32:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.581 06:32:22 -- accel/accel.sh@42 -- # jq -r . 00:07:27.581 [2024-12-05 06:32:22.659390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
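The mthread variant adds -T 2, reported above as "# threads/core: 2": core 0 now contributes two table rows, "0,1" and "0,0" (core,thread), whose rates sum to the Total row. One last reader-side check with the logged figures:

```bash
# Two worker threads on core 0 ("0,0" and "0,1" above) account for the Total.
echo $(( 40320 + 40224 ))                        # prints 80544, the Total transfers/s
echo "$(( 80544 * 4096 / 1024 / 1024 )) MiB/s"   # prints "314 MiB/s", as reported
```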
00:07:27.581 [2024-12-05 06:32:22.660053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68824 ] 00:07:27.581 [2024-12-05 06:32:22.793523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.581 [2024-12-05 06:32:22.823048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=0x1 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=decompress 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=software 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=32 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- 
accel/accel.sh@21 -- # val=32 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=2 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val=Yes 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:27.581 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:27.581 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:27.581 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 06:32:23 -- accel/accel.sh@21 -- # val= 00:07:28.520 06:32:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.520 06:32:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.520 ************************************ 00:07:28.520 END TEST accel_decomp_mthread 00:07:28.520 ************************************ 00:07:28.520 06:32:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.520 06:32:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.520 06:32:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.520 00:07:28.520 real 0m2.627s 00:07:28.520 user 0m2.285s 00:07:28.520 sys 0m0.143s 00:07:28.520 06:32:23 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:28.520 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:28.780 06:32:23 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.780 06:32:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:28.780 06:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.780 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:28.780 ************************************ 00:07:28.780 START TEST accel_deomp_full_mthread 00:07:28.780 ************************************ 00:07:28.780 06:32:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.780 06:32:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.780 06:32:24 -- accel/accel.sh@17 -- # local accel_module 00:07:28.780 06:32:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.780 06:32:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.780 06:32:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.780 06:32:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.780 06:32:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.780 06:32:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.780 06:32:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.780 06:32:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.780 06:32:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.780 06:32:24 -- accel/accel.sh@42 -- # jq -r . 00:07:28.780 [2024-12-05 06:32:24.027412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.780 [2024-12-05 06:32:24.027498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68858 ] 00:07:28.780 [2024-12-05 06:32:24.163086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.780 [2024-12-05 06:32:24.195073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.160 06:32:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.160 00:07:30.160 SPDK Configuration: 00:07:30.160 Core mask: 0x1 00:07:30.160 00:07:30.161 Accel Perf Configuration: 00:07:30.161 Workload Type: decompress 00:07:30.161 Transfer size: 111250 bytes 00:07:30.161 Vector count 1 00:07:30.161 Module: software 00:07:30.161 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.161 Queue depth: 32 00:07:30.161 Allocate depth: 32 00:07:30.161 # threads/core: 2 00:07:30.161 Run time: 1 seconds 00:07:30.161 Verify: Yes 00:07:30.161 00:07:30.161 Running for 1 seconds... 
00:07:30.161 00:07:30.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.161 ------------------------------------------------------------------------------------ 00:07:30.161 0,1 2720/s 112 MiB/s 0 0 00:07:30.161 0,0 2720/s 112 MiB/s 0 0 00:07:30.161 ==================================================================================== 00:07:30.161 Total 5440/s 577 MiB/s 0 0' 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.161 06:32:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.161 06:32:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.161 06:32:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.161 06:32:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.161 06:32:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.161 06:32:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.161 06:32:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.161 06:32:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.161 06:32:25 -- accel/accel.sh@42 -- # jq -r . 00:07:30.161 [2024-12-05 06:32:25.347265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.161 [2024-12-05 06:32:25.347373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68872 ] 00:07:30.161 [2024-12-05 06:32:25.475855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.161 [2024-12-05 06:32:25.505277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=0x1 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=decompress 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=software 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=32 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=32 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=2 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val=Yes 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:30.161 06:32:25 -- accel/accel.sh@21 -- # val= 00:07:30.161 06:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # IFS=: 00:07:30.161 06:32:25 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # 
read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.539 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.539 06:32:26 -- accel/accel.sh@21 -- # val= 00:07:31.539 06:32:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.540 06:32:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.540 06:32:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.540 06:32:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.540 06:32:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.540 06:32:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.540 00:07:31.540 real 0m2.650s 00:07:31.540 user 0m2.312s 00:07:31.540 sys 0m0.136s 00:07:31.540 06:32:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.540 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.540 ************************************ 00:07:31.540 END TEST accel_deomp_full_mthread 00:07:31.540 ************************************ 00:07:31.540 06:32:26 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:31.540 06:32:26 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.540 06:32:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:31.540 06:32:26 -- accel/accel.sh@129 -- # build_accel_config 00:07:31.540 06:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.540 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:31.540 06:32:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.540 06:32:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.540 06:32:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.540 06:32:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.540 06:32:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.540 06:32:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.540 06:32:26 -- accel/accel.sh@42 -- # jq -r . 00:07:31.540 ************************************ 00:07:31.540 START TEST accel_dif_functional_tests 00:07:31.540 ************************************ 00:07:31.540 06:32:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.540 [2024-12-05 06:32:26.757986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.540 [2024-12-05 06:32:26.758266] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68908 ] 00:07:31.540 [2024-12-05 06:32:26.894669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.540 [2024-12-05 06:32:26.925467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.540 [2024-12-05 06:32:26.925613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.540 [2024-12-05 06:32:26.925620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.540 00:07:31.540 00:07:31.540 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.540 http://cunit.sourceforge.net/ 00:07:31.540 00:07:31.540 00:07:31.540 Suite: accel_dif 00:07:31.540 Test: verify: DIF generated, GUARD check ...passed 00:07:31.540 Test: verify: DIF generated, APPTAG check ...passed 00:07:31.540 Test: verify: DIF generated, REFTAG check ...passed 00:07:31.540 Test: verify: DIF not generated, GUARD check ...passed 00:07:31.540 Test: verify: DIF not generated, APPTAG check ...[2024-12-05 06:32:26.969579] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.540 [2024-12-05 06:32:26.969659] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.540 [2024-12-05 06:32:26.969693] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.540 passed 00:07:31.540 Test: verify: DIF not generated, REFTAG check ...[2024-12-05 06:32:26.969986] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.540 [2024-12-05 06:32:26.970022] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.540 passed 00:07:31.540 Test: verify: APPTAG correct, APPTAG check ...[2024-12-05 06:32:26.970075] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.540 passed 00:07:31.540 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-05 06:32:26.970401] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:31.540 passed 00:07:31.540 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:31.540 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:31.540 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:31.540 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-05 06:32:26.970912] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:31.540 passed 00:07:31.540 Test: generate copy: DIF generated, GUARD check ...passed 00:07:31.540 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:31.540 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:31.540 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:31.540 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:31.540 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:31.540 Test: generate copy: iovecs-len validate ...passed 00:07:31.540 Test: generate copy: buffer alignment validate ...[2024-12-05 06:32:26.971686] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:31.540 passed 00:07:31.540 00:07:31.540 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.540 suites 1 1 n/a 0 0 00:07:31.540 tests 20 20 20 0 0 00:07:31.540 asserts 204 204 204 0 n/a 00:07:31.540 00:07:31.540 Elapsed time = 0.007 seconds 00:07:31.800 00:07:31.800 real 0m0.382s 00:07:31.800 user 0m0.436s 00:07:31.800 sys 0m0.095s 00:07:31.800 06:32:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.800 ************************************ 00:07:31.800 END TEST accel_dif_functional_tests 00:07:31.800 ************************************ 00:07:31.800 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.800 ************************************ 00:07:31.800 END TEST accel 00:07:31.800 ************************************ 00:07:31.800 00:07:31.800 real 0m56.303s 00:07:31.800 user 1m1.569s 00:07:31.800 sys 0m4.124s 00:07:31.800 06:32:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.800 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.800 06:32:27 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:31.800 06:32:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.800 06:32:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.800 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:07:31.800 ************************************ 00:07:31.800 START TEST accel_rpc 00:07:31.800 ************************************ 00:07:31.800 06:32:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:31.800 * Looking for test storage... 00:07:31.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:31.800 06:32:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:32.060 06:32:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:32.060 06:32:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:32.060 06:32:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:32.060 06:32:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:32.060 06:32:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:32.060 06:32:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:32.060 06:32:27 -- scripts/common.sh@335 -- # IFS=.-: 00:07:32.060 06:32:27 -- scripts/common.sh@335 -- # read -ra ver1 00:07:32.060 06:32:27 -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.060 06:32:27 -- scripts/common.sh@336 -- # read -ra ver2 00:07:32.060 06:32:27 -- scripts/common.sh@337 -- # local 'op=<' 00:07:32.060 06:32:27 -- scripts/common.sh@339 -- # ver1_l=2 00:07:32.060 06:32:27 -- scripts/common.sh@340 -- # ver2_l=1 00:07:32.060 06:32:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:32.060 06:32:27 -- scripts/common.sh@343 -- # case "$op" in 00:07:32.060 06:32:27 -- scripts/common.sh@344 -- # : 1 00:07:32.060 06:32:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:32.060 06:32:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.060 06:32:27 -- scripts/common.sh@364 -- # decimal 1 00:07:32.060 06:32:27 -- scripts/common.sh@352 -- # local d=1 00:07:32.060 06:32:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.060 06:32:27 -- scripts/common.sh@354 -- # echo 1 00:07:32.060 06:32:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:32.060 06:32:27 -- scripts/common.sh@365 -- # decimal 2 00:07:32.060 06:32:27 -- scripts/common.sh@352 -- # local d=2 00:07:32.060 06:32:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.060 06:32:27 -- scripts/common.sh@354 -- # echo 2 00:07:32.060 06:32:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:32.060 06:32:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:32.060 06:32:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:32.060 06:32:27 -- scripts/common.sh@367 -- # return 0 00:07:32.060 06:32:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.060 06:32:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.060 --rc genhtml_branch_coverage=1 00:07:32.060 --rc genhtml_function_coverage=1 00:07:32.060 --rc genhtml_legend=1 00:07:32.060 --rc geninfo_all_blocks=1 00:07:32.060 --rc geninfo_unexecuted_blocks=1 00:07:32.060 00:07:32.060 ' 00:07:32.060 06:32:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.060 --rc genhtml_branch_coverage=1 00:07:32.060 --rc genhtml_function_coverage=1 00:07:32.060 --rc genhtml_legend=1 00:07:32.060 --rc geninfo_all_blocks=1 00:07:32.060 --rc geninfo_unexecuted_blocks=1 00:07:32.060 00:07:32.060 ' 00:07:32.060 06:32:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.060 --rc genhtml_branch_coverage=1 00:07:32.060 --rc genhtml_function_coverage=1 00:07:32.060 --rc genhtml_legend=1 00:07:32.060 --rc geninfo_all_blocks=1 00:07:32.060 --rc geninfo_unexecuted_blocks=1 00:07:32.060 00:07:32.060 ' 00:07:32.060 06:32:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.060 --rc genhtml_branch_coverage=1 00:07:32.060 --rc genhtml_function_coverage=1 00:07:32.060 --rc genhtml_legend=1 00:07:32.060 --rc geninfo_all_blocks=1 00:07:32.060 --rc geninfo_unexecuted_blocks=1 00:07:32.060 00:07:32.060 ' 00:07:32.060 06:32:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.060 06:32:27 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:32.060 06:32:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68979 00:07:32.060 06:32:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 68979 00:07:32.060 06:32:27 -- common/autotest_common.sh@829 -- # '[' -z 68979 ']' 00:07:32.060 06:32:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.060 06:32:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.060 06:32:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
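The waitforlisten step above blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that start-and-wait pattern, assuming the same binary and script paths as this run (the real helper in autotest_common.sh also tracks the target pid, so this is illustrative only):

    # Start the target paused at the RPC layer, then poll until it answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" spdk_get_version >/dev/null 2>&1; do
        sleep 0.1    # keep retrying until /var/tmp/spdk.sock accepts requests
    done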
00:07:32.060 06:32:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.060 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.060 [2024-12-05 06:32:27.426531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.060 [2024-12-05 06:32:27.427394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68979 ] 00:07:32.320 [2024-12-05 06:32:27.558504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.320 [2024-12-05 06:32:27.592182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.320 [2024-12-05 06:32:27.592384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.258 06:32:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.258 06:32:28 -- common/autotest_common.sh@862 -- # return 0 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:33.258 06:32:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.258 06:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.258 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 ************************************ 00:07:33.258 START TEST accel_assign_opcode 00:07:33.258 ************************************ 00:07:33.258 06:32:28 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:33.258 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.258 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 [2024-12-05 06:32:28.404831] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:33.258 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.258 06:32:28 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:33.259 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.259 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.259 [2024-12-05 06:32:28.412825] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:33.259 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.259 06:32:28 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:33.259 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.259 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.259 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.259 06:32:28 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:33.259 06:32:28 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:33.259 06:32:28 -- accel/accel_rpc.sh@42 -- # grep software 00:07:33.259 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.259 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.259 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.259 software 00:07:33.259 
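The sequence above pins the copy opcode to the software module before framework init, then reads the assignment back once init has run; condensed, the same exchange looks like this (a sketch using the rpc.py path from this run, with jq picking the copy entry out of the assignments JSON, as the test itself does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" accel_assign_opc -o copy -m software      # only honored before init
    "$rpc" framework_start_init                      # modules lock in at init time
    "$rpc" accel_get_opc_assignments | jq -r .copy   # prints: software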
************************************ 00:07:33.259 END TEST accel_assign_opcode 00:07:33.259 ************************************ 00:07:33.259 00:07:33.259 real 0m0.186s 00:07:33.259 user 0m0.057s 00:07:33.259 sys 0m0.011s 00:07:33.259 06:32:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.259 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.259 06:32:28 -- accel/accel_rpc.sh@55 -- # killprocess 68979 00:07:33.259 06:32:28 -- common/autotest_common.sh@936 -- # '[' -z 68979 ']' 00:07:33.259 06:32:28 -- common/autotest_common.sh@940 -- # kill -0 68979 00:07:33.259 06:32:28 -- common/autotest_common.sh@941 -- # uname 00:07:33.259 06:32:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.259 06:32:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68979 00:07:33.259 killing process with pid 68979 00:07:33.259 06:32:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.259 06:32:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.259 06:32:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68979' 00:07:33.259 06:32:28 -- common/autotest_common.sh@955 -- # kill 68979 00:07:33.259 06:32:28 -- common/autotest_common.sh@960 -- # wait 68979 00:07:33.518 ************************************ 00:07:33.518 END TEST accel_rpc 00:07:33.518 ************************************ 00:07:33.518 00:07:33.518 real 0m1.683s 00:07:33.518 user 0m1.870s 00:07:33.518 sys 0m0.333s 00:07:33.518 06:32:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.518 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.518 06:32:28 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.518 06:32:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:33.518 06:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.518 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.518 ************************************ 00:07:33.518 START TEST app_cmdline 00:07:33.518 ************************************ 00:07:33.518 06:32:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.778 * Looking for test storage... 
00:07:33.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.778 06:32:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:33.778 06:32:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:33.779 06:32:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:33.779 06:32:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:33.779 06:32:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:33.779 06:32:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:33.779 06:32:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:33.779 06:32:29 -- scripts/common.sh@335 -- # IFS=.-: 00:07:33.779 06:32:29 -- scripts/common.sh@335 -- # read -ra ver1 00:07:33.779 06:32:29 -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.779 06:32:29 -- scripts/common.sh@336 -- # read -ra ver2 00:07:33.779 06:32:29 -- scripts/common.sh@337 -- # local 'op=<' 00:07:33.779 06:32:29 -- scripts/common.sh@339 -- # ver1_l=2 00:07:33.779 06:32:29 -- scripts/common.sh@340 -- # ver2_l=1 00:07:33.779 06:32:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:33.779 06:32:29 -- scripts/common.sh@343 -- # case "$op" in 00:07:33.779 06:32:29 -- scripts/common.sh@344 -- # : 1 00:07:33.779 06:32:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:33.779 06:32:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.779 06:32:29 -- scripts/common.sh@364 -- # decimal 1 00:07:33.779 06:32:29 -- scripts/common.sh@352 -- # local d=1 00:07:33.779 06:32:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.779 06:32:29 -- scripts/common.sh@354 -- # echo 1 00:07:33.779 06:32:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:33.779 06:32:29 -- scripts/common.sh@365 -- # decimal 2 00:07:33.779 06:32:29 -- scripts/common.sh@352 -- # local d=2 00:07:33.779 06:32:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.779 06:32:29 -- scripts/common.sh@354 -- # echo 2 00:07:33.779 06:32:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:33.779 06:32:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:33.779 06:32:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:33.779 06:32:29 -- scripts/common.sh@367 -- # return 0 00:07:33.779 06:32:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.779 06:32:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.779 --rc genhtml_branch_coverage=1 00:07:33.779 --rc genhtml_function_coverage=1 00:07:33.779 --rc genhtml_legend=1 00:07:33.779 --rc geninfo_all_blocks=1 00:07:33.779 --rc geninfo_unexecuted_blocks=1 00:07:33.779 00:07:33.779 ' 00:07:33.779 06:32:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.779 --rc genhtml_branch_coverage=1 00:07:33.779 --rc genhtml_function_coverage=1 00:07:33.779 --rc genhtml_legend=1 00:07:33.779 --rc geninfo_all_blocks=1 00:07:33.779 --rc geninfo_unexecuted_blocks=1 00:07:33.779 00:07:33.779 ' 00:07:33.779 06:32:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.779 --rc genhtml_branch_coverage=1 00:07:33.779 --rc genhtml_function_coverage=1 00:07:33.779 --rc genhtml_legend=1 00:07:33.779 --rc geninfo_all_blocks=1 00:07:33.779 --rc geninfo_unexecuted_blocks=1 00:07:33.779 00:07:33.779 ' 00:07:33.779 06:32:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:33.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.779 --rc genhtml_branch_coverage=1 00:07:33.779 --rc genhtml_function_coverage=1 00:07:33.779 --rc genhtml_legend=1 00:07:33.779 --rc geninfo_all_blocks=1 00:07:33.779 --rc geninfo_unexecuted_blocks=1 00:07:33.779 00:07:33.779 ' 00:07:33.779 06:32:29 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.779 06:32:29 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69074 00:07:33.779 06:32:29 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.779 06:32:29 -- app/cmdline.sh@18 -- # waitforlisten 69074 00:07:33.779 06:32:29 -- common/autotest_common.sh@829 -- # '[' -z 69074 ']' 00:07:33.779 06:32:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.779 06:32:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.779 06:32:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.779 06:32:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.779 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:33.779 [2024-12-05 06:32:29.149982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.779 [2024-12-05 06:32:29.150244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69074 ] 00:07:34.039 [2024-12-05 06:32:29.282223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.039 [2024-12-05 06:32:29.313063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.039 [2024-12-05 06:32:29.313463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.977 06:32:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.977 06:32:30 -- common/autotest_common.sh@862 -- # return 0 00:07:34.977 06:32:30 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:34.977 { 00:07:34.977 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:34.977 "fields": { 00:07:34.977 "major": 24, 00:07:34.977 "minor": 1, 00:07:34.977 "patch": 1, 00:07:34.977 "suffix": "-pre", 00:07:34.977 "commit": "c13c99a5e" 00:07:34.977 } 00:07:34.977 } 00:07:34.977 06:32:30 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.977 06:32:30 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.977 06:32:30 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.977 06:32:30 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.977 06:32:30 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.977 06:32:30 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.977 06:32:30 -- app/cmdline.sh@26 -- # sort 00:07:34.977 06:32:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.977 06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 06:32:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.977 06:32:30 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.977 06:32:30 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.977 06:32:30 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.977 06:32:30 -- common/autotest_common.sh@650 -- # local es=0 00:07:34.977 06:32:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.977 06:32:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.977 06:32:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.977 06:32:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.977 06:32:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.977 06:32:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.977 06:32:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.977 06:32:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.977 06:32:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:34.977 06:32:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.236 request: 00:07:35.236 { 00:07:35.236 "method": "env_dpdk_get_mem_stats", 00:07:35.236 "req_id": 1 00:07:35.236 } 00:07:35.236 Got JSON-RPC error response 00:07:35.236 response: 00:07:35.236 { 00:07:35.236 "code": -32601, 00:07:35.236 "message": "Method not found" 00:07:35.236 } 00:07:35.236 06:32:30 -- common/autotest_common.sh@653 -- # es=1 00:07:35.236 06:32:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.236 06:32:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.236 06:32:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.236 06:32:30 -- app/cmdline.sh@1 -- # killprocess 69074 00:07:35.236 06:32:30 -- common/autotest_common.sh@936 -- # '[' -z 69074 ']' 00:07:35.236 06:32:30 -- common/autotest_common.sh@940 -- # kill -0 69074 00:07:35.236 06:32:30 -- common/autotest_common.sh@941 -- # uname 00:07:35.236 06:32:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:35.236 06:32:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69074 00:07:35.496 killing process with pid 69074 00:07:35.496 06:32:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:35.496 06:32:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:35.496 06:32:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69074' 00:07:35.496 06:32:30 -- common/autotest_common.sh@955 -- # kill 69074 00:07:35.496 06:32:30 -- common/autotest_common.sh@960 -- # wait 69074 00:07:35.496 ************************************ 00:07:35.496 END TEST app_cmdline 00:07:35.496 ************************************ 00:07:35.496 00:07:35.496 real 0m2.006s 00:07:35.496 user 0m2.643s 00:07:35.496 sys 0m0.349s 00:07:35.496 06:32:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.496 06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.755 06:32:30 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.755 06:32:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.755 06:32:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.755 06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.755 
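Because spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods answer; anything else, such as the env_dpdk_get_mem_stats call above, is rejected with JSON-RPC error -32601. A sketch of the same exchange by hand (same binary and script paths as this run, output abridged):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" spdk_get_version >/dev/null 2>&1; do sleep 0.1; done   # wait for the socket
    "$rpc" spdk_get_version           # allowed: prints the version JSON
    "$rpc" env_dpdk_get_mem_stats     # blocked: "Method not found" (-32601)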
************************************ 00:07:35.755 START TEST version 00:07:35.755 ************************************ 00:07:35.755 06:32:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.755 * Looking for test storage... 00:07:35.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.755 06:32:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.755 06:32:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.755 06:32:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.755 06:32:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.755 06:32:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:35.755 06:32:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:35.755 06:32:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:35.755 06:32:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:35.755 06:32:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:35.755 06:32:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.755 06:32:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:35.755 06:32:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:35.755 06:32:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:35.755 06:32:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:35.755 06:32:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:35.755 06:32:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:35.755 06:32:31 -- scripts/common.sh@344 -- # : 1 00:07:35.755 06:32:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:35.755 06:32:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.756 06:32:31 -- scripts/common.sh@364 -- # decimal 1 00:07:35.756 06:32:31 -- scripts/common.sh@352 -- # local d=1 00:07:35.756 06:32:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.756 06:32:31 -- scripts/common.sh@354 -- # echo 1 00:07:35.756 06:32:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:35.756 06:32:31 -- scripts/common.sh@365 -- # decimal 2 00:07:35.756 06:32:31 -- scripts/common.sh@352 -- # local d=2 00:07:35.756 06:32:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.756 06:32:31 -- scripts/common.sh@354 -- # echo 2 00:07:35.756 06:32:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:35.756 06:32:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:35.756 06:32:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:35.756 06:32:31 -- scripts/common.sh@367 -- # return 0 00:07:35.756 06:32:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.756 06:32:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.756 --rc genhtml_branch_coverage=1 00:07:35.756 --rc genhtml_function_coverage=1 00:07:35.756 --rc genhtml_legend=1 00:07:35.756 --rc geninfo_all_blocks=1 00:07:35.756 --rc geninfo_unexecuted_blocks=1 00:07:35.756 00:07:35.756 ' 00:07:35.756 06:32:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.756 --rc genhtml_branch_coverage=1 00:07:35.756 --rc genhtml_function_coverage=1 00:07:35.756 --rc genhtml_legend=1 00:07:35.756 --rc geninfo_all_blocks=1 00:07:35.756 --rc geninfo_unexecuted_blocks=1 00:07:35.756 00:07:35.756 ' 00:07:35.756 06:32:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.756 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:35.756 --rc genhtml_branch_coverage=1 00:07:35.756 --rc genhtml_function_coverage=1 00:07:35.756 --rc genhtml_legend=1 00:07:35.756 --rc geninfo_all_blocks=1 00:07:35.756 --rc geninfo_unexecuted_blocks=1 00:07:35.756 00:07:35.756 ' 00:07:35.756 06:32:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.756 --rc genhtml_branch_coverage=1 00:07:35.756 --rc genhtml_function_coverage=1 00:07:35.756 --rc genhtml_legend=1 00:07:35.756 --rc geninfo_all_blocks=1 00:07:35.756 --rc geninfo_unexecuted_blocks=1 00:07:35.756 00:07:35.756 ' 00:07:35.756 06:32:31 -- app/version.sh@17 -- # get_header_version major 00:07:35.756 06:32:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.756 06:32:31 -- app/version.sh@14 -- # cut -f2 00:07:35.756 06:32:31 -- app/version.sh@14 -- # tr -d '"' 00:07:35.756 06:32:31 -- app/version.sh@17 -- # major=24 00:07:35.756 06:32:31 -- app/version.sh@18 -- # get_header_version minor 00:07:35.756 06:32:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.756 06:32:31 -- app/version.sh@14 -- # cut -f2 00:07:35.756 06:32:31 -- app/version.sh@14 -- # tr -d '"' 00:07:35.756 06:32:31 -- app/version.sh@18 -- # minor=1 00:07:35.756 06:32:31 -- app/version.sh@19 -- # get_header_version patch 00:07:35.756 06:32:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.756 06:32:31 -- app/version.sh@14 -- # cut -f2 00:07:35.756 06:32:31 -- app/version.sh@14 -- # tr -d '"' 00:07:35.756 06:32:31 -- app/version.sh@19 -- # patch=1 00:07:35.756 06:32:31 -- app/version.sh@20 -- # get_header_version suffix 00:07:35.756 06:32:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.756 06:32:31 -- app/version.sh@14 -- # cut -f2 00:07:35.756 06:32:31 -- app/version.sh@14 -- # tr -d '"' 00:07:35.756 06:32:31 -- app/version.sh@20 -- # suffix=-pre 00:07:35.756 06:32:31 -- app/version.sh@22 -- # version=24.1 00:07:35.756 06:32:31 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.756 06:32:31 -- app/version.sh@25 -- # version=24.1.1 00:07:35.756 06:32:31 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:35.756 06:32:31 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.756 06:32:31 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.756 06:32:31 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:35.756 06:32:31 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:35.756 ************************************ 00:07:35.756 END TEST version 00:07:35.756 ************************************ 00:07:35.756 00:07:35.756 real 0m0.226s 00:07:35.756 user 0m0.145s 00:07:35.756 sys 0m0.117s 00:07:35.756 06:32:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.756 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.015 06:32:31 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:36.015 06:32:31 -- spdk/autotest.sh@191 -- # uname -s 00:07:36.015 06:32:31 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
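version.sh derives the version string by scraping include/spdk/version.h with grep, cut, and tr, exactly as traced above; as a standalone function the same logic reads (a sketch assuming this run's repo path, and that the header separates name and value with a tab, which is what cut -f2 relies on):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 1
    echo "${major}.${minor}.${patch}"     # 24.1.1, the base of py_version 24.1.1rc0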
00:07:36.015 06:32:31 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:36.015 06:32:31 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:36.015 06:32:31 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:36.016 06:32:31 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:36.016 06:32:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.016 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.016 ************************************ 00:07:36.016 START TEST spdk_dd 00:07:36.016 ************************************ 00:07:36.016 06:32:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:36.016 * Looking for test storage... 00:07:36.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.016 06:32:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.016 06:32:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.016 06:32:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.016 06:32:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.016 06:32:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.016 06:32:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.016 06:32:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.016 06:32:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.016 06:32:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.016 06:32:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.016 06:32:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.016 06:32:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.016 06:32:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.016 06:32:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.016 06:32:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.016 06:32:31 -- scripts/common.sh@344 -- # : 1 00:07:36.016 06:32:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.016 06:32:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.016 06:32:31 -- scripts/common.sh@364 -- # decimal 1 00:07:36.016 06:32:31 -- scripts/common.sh@352 -- # local d=1 00:07:36.016 06:32:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.016 06:32:31 -- scripts/common.sh@354 -- # echo 1 00:07:36.016 06:32:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.016 06:32:31 -- scripts/common.sh@365 -- # decimal 2 00:07:36.016 06:32:31 -- scripts/common.sh@352 -- # local d=2 00:07:36.016 06:32:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.016 06:32:31 -- scripts/common.sh@354 -- # echo 2 00:07:36.016 06:32:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.016 06:32:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.016 06:32:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.016 06:32:31 -- scripts/common.sh@367 -- # return 0 00:07:36.016 06:32:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.016 --rc genhtml_branch_coverage=1 00:07:36.016 --rc genhtml_function_coverage=1 00:07:36.016 --rc genhtml_legend=1 00:07:36.016 --rc geninfo_all_blocks=1 00:07:36.016 --rc geninfo_unexecuted_blocks=1 00:07:36.016 00:07:36.016 ' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.016 --rc genhtml_branch_coverage=1 00:07:36.016 --rc genhtml_function_coverage=1 00:07:36.016 --rc genhtml_legend=1 00:07:36.016 --rc geninfo_all_blocks=1 00:07:36.016 --rc geninfo_unexecuted_blocks=1 00:07:36.016 00:07:36.016 ' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.016 --rc genhtml_branch_coverage=1 00:07:36.016 --rc genhtml_function_coverage=1 00:07:36.016 --rc genhtml_legend=1 00:07:36.016 --rc geninfo_all_blocks=1 00:07:36.016 --rc geninfo_unexecuted_blocks=1 00:07:36.016 00:07:36.016 ' 00:07:36.016 06:32:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.016 --rc genhtml_branch_coverage=1 00:07:36.016 --rc genhtml_function_coverage=1 00:07:36.016 --rc genhtml_legend=1 00:07:36.016 --rc geninfo_all_blocks=1 00:07:36.016 --rc geninfo_unexecuted_blocks=1 00:07:36.016 00:07:36.016 ' 00:07:36.016 06:32:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.016 06:32:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.016 06:32:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.016 06:32:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.016 06:32:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.016 06:32:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.016 06:32:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.016 06:32:31 -- paths/export.sh@5 -- # export PATH 00:07:36.016 06:32:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.016 06:32:31 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:36.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:36.575 06:32:31 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:36.575 06:32:31 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:36.576 06:32:31 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:36.576 06:32:31 -- scripts/common.sh@312 -- # local nvmes 00:07:36.576 06:32:31 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:36.576 06:32:31 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:36.576 06:32:31 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:36.576 06:32:31 -- scripts/common.sh@297 -- # local bdf= 00:07:36.576 06:32:31 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:36.576 06:32:31 -- scripts/common.sh@232 -- # local class 00:07:36.576 06:32:31 -- scripts/common.sh@233 -- # local subclass 00:07:36.576 06:32:31 -- scripts/common.sh@234 -- # local progif 00:07:36.576 06:32:31 -- scripts/common.sh@235 -- # printf %02x 1 00:07:36.576 06:32:31 -- scripts/common.sh@235 -- # class=01 00:07:36.576 06:32:31 -- scripts/common.sh@236 -- # printf %02x 8 00:07:36.576 06:32:31 -- scripts/common.sh@236 -- # subclass=08 00:07:36.576 06:32:31 -- scripts/common.sh@237 -- # printf %02x 2 00:07:36.576 06:32:31 -- scripts/common.sh@237 -- # progif=02 00:07:36.576 06:32:31 -- scripts/common.sh@239 -- # hash lspci 00:07:36.576 06:32:31 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:36.576 06:32:31 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:36.576 06:32:31 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:36.576 06:32:31 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:36.576 06:32:31 -- scripts/common.sh@244 -- # tr -d '"' 00:07:36.576 06:32:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.576 06:32:31 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:36.576 06:32:31 -- scripts/common.sh@15 -- # local i 00:07:36.576 06:32:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:36.576 06:32:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:36.576 06:32:31 -- scripts/common.sh@24 -- # return 0 00:07:36.576 06:32:31 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:36.576 06:32:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.576 06:32:31 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:36.576 06:32:31 -- scripts/common.sh@15 -- # local i 00:07:36.576 06:32:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:36.576 06:32:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:36.576 06:32:31 -- scripts/common.sh@24 -- # return 0 00:07:36.576 06:32:31 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:36.576 06:32:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:36.576 06:32:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:36.576 06:32:31 -- scripts/common.sh@322 -- # uname -s 00:07:36.576 06:32:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:36.576 06:32:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:36.576 06:32:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:36.576 06:32:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:36.576 06:32:31 -- scripts/common.sh@322 -- # uname -s 00:07:36.576 06:32:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:36.576 06:32:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:36.576 06:32:31 -- scripts/common.sh@327 -- # (( 2 )) 00:07:36.576 06:32:31 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:36.576 06:32:31 -- dd/dd.sh@13 -- # check_liburing 00:07:36.576 06:32:31 -- dd/common.sh@139 -- # local lib so 00:07:36.576 06:32:31 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:36.576 06:32:31 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:36.576 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.576 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:36.577 
06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:36.577 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.577 06:32:31 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:36.578 06:32:31 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:36.578 06:32:31 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:36.578 06:32:31 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:36.578 * spdk_dd linked to liburing 00:07:36.578 06:32:31 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:36.578 06:32:31 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:36.578 06:32:31 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.578 06:32:31 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.578 06:32:31 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.578 06:32:31 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.578 06:32:31 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:36.578 06:32:31 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.578 06:32:31 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.578 06:32:31 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.578 06:32:31 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.578 06:32:31 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.578 06:32:31 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.578 06:32:31 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.578 06:32:31 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.578 06:32:31 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.578 06:32:31 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.578 06:32:31 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.578 06:32:31 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:36.578 06:32:31 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.578 06:32:31 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:36.578 06:32:31 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:07:36.578 06:32:31 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.578 06:32:31 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:36.578 06:32:31 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.578 06:32:31 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:36.578 06:32:31 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.578 06:32:31 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.578 06:32:31 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.578 06:32:31 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:36.578 06:32:31 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.578 06:32:31 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:36.578 06:32:31 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:36.578 06:32:31 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:36.578 06:32:31 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:36.578 06:32:31 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:36.578 06:32:31 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:36.578 06:32:31 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:36.578 06:32:31 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:36.578 06:32:31 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:36.578 06:32:31 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:36.578 06:32:31 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:36.578 06:32:31 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:36.578 06:32:31 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:36.578 06:32:31 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:36.578 06:32:31 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.578 06:32:31 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:36.578 06:32:31 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:36.578 06:32:31 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:36.578 06:32:31 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.578 06:32:31 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:36.578 06:32:31 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:36.578 06:32:31 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:36.578 06:32:31 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:36.578 06:32:31 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:36.578 06:32:31 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:36.578 06:32:31 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.578 06:32:31 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:36.578 06:32:31 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.578 06:32:31 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:36.578 06:32:31 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:36.578 06:32:31 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:36.578 06:32:31 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:36.578 06:32:31 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:36.578 06:32:31 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:36.578 06:32:31 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:36.578 06:32:31 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:36.578 06:32:31 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:07:36.578 06:32:31 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:36.578 06:32:31 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:36.578 06:32:31 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:36.578 06:32:31 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:36.578 06:32:31 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:36.578 06:32:31 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:36.578 06:32:31 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.578 06:32:31 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:36.578 06:32:31 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:36.578 06:32:31 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:36.578 06:32:31 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.578 06:32:31 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:36.578 06:32:31 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:36.578 06:32:31 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:36.578 06:32:31 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:36.578 06:32:31 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:36.578 06:32:31 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:36.578 06:32:31 -- dd/common.sh@157 -- # return 0 00:07:36.578 06:32:31 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:36.578 06:32:31 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:36.578 06:32:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:36.578 06:32:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.578 06:32:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.578 ************************************ 00:07:36.578 START TEST spdk_dd_basic_rw 00:07:36.578 ************************************ 00:07:36.578 06:32:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:36.578 * Looking for test storage... 00:07:36.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.578 06:32:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.578 06:32:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.578 06:32:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.845 06:32:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.845 06:32:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.845 06:32:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.845 06:32:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.845 06:32:32 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.845 06:32:32 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.845 06:32:32 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.845 06:32:32 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.845 06:32:32 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.845 06:32:32 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.845 06:32:32 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.845 06:32:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.845 06:32:32 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.845 06:32:32 -- scripts/common.sh@344 -- # : 1 00:07:36.845 06:32:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.845 06:32:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.845 06:32:32 -- scripts/common.sh@364 -- # decimal 1 00:07:36.845 06:32:32 -- scripts/common.sh@352 -- # local d=1 00:07:36.845 06:32:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.845 06:32:32 -- scripts/common.sh@354 -- # echo 1 00:07:36.845 06:32:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.845 06:32:32 -- scripts/common.sh@365 -- # decimal 2 00:07:36.845 06:32:32 -- scripts/common.sh@352 -- # local d=2 00:07:36.845 06:32:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.845 06:32:32 -- scripts/common.sh@354 -- # echo 2 00:07:36.845 06:32:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.845 06:32:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.845 06:32:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.845 06:32:32 -- scripts/common.sh@367 -- # return 0 00:07:36.845 06:32:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.845 06:32:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.846 --rc genhtml_branch_coverage=1 00:07:36.846 --rc genhtml_function_coverage=1 00:07:36.846 --rc genhtml_legend=1 00:07:36.846 --rc geninfo_all_blocks=1 00:07:36.846 --rc geninfo_unexecuted_blocks=1 00:07:36.846 00:07:36.846 ' 00:07:36.846 06:32:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.846 --rc genhtml_branch_coverage=1 00:07:36.846 --rc genhtml_function_coverage=1 00:07:36.846 --rc genhtml_legend=1 00:07:36.846 --rc geninfo_all_blocks=1 00:07:36.846 --rc geninfo_unexecuted_blocks=1 00:07:36.846 00:07:36.846 ' 00:07:36.846 06:32:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.846 --rc genhtml_branch_coverage=1 00:07:36.846 --rc genhtml_function_coverage=1 00:07:36.846 --rc genhtml_legend=1 00:07:36.846 --rc geninfo_all_blocks=1 00:07:36.846 --rc geninfo_unexecuted_blocks=1 00:07:36.846 00:07:36.846 ' 00:07:36.846 06:32:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.846 --rc genhtml_branch_coverage=1 00:07:36.846 --rc genhtml_function_coverage=1 00:07:36.846 --rc genhtml_legend=1 00:07:36.846 --rc geninfo_all_blocks=1 00:07:36.846 --rc geninfo_unexecuted_blocks=1 00:07:36.846 00:07:36.846 ' 00:07:36.846 06:32:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.846 06:32:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.846 06:32:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.846 06:32:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.846 06:32:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.846 06:32:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.846 06:32:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.846 06:32:32 -- paths/export.sh@5 -- # export PATH 00:07:36.846 06:32:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.846 06:32:32 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:36.846 06:32:32 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:36.846 06:32:32 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:36.846 06:32:32 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:36.846 06:32:32 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:36.846 06:32:32 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:36.846 06:32:32 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.846 06:32:32 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.846 06:32:32 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.846 06:32:32 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:36.846 06:32:32 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:36.846 06:32:32 -- dd/common.sh@126 -- # mapfile -t id 00:07:36.846 06:32:32 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:37.108 06:32:32 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
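The wall of controller data that follows is raw `spdk_nvme_identify` output captured into the id array; get_native_nvme_bs then extracts the native block size with two regex matches, first the current LBA format index, then that format's data size. A condensed sketch of the extraction (regexes kept in variables so their spaces survive; the patterns mirror the ones visible in the trace):

    mapfile -t id < <(build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
    cur_re='Current LBA Format: *LBA Format #([0-9]+)'
    if [[ ${id[*]} =~ $cur_re ]]; then
        lbaf=${BASH_REMATCH[1]}                                    # "04" for this controller
        size_re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ ${id[*]} =~ $size_re ]] && native_bs=${BASH_REMATCH[1]}  # 4096
    fi

For this QEMU controller the current format is #04 (4096-byte data, no metadata), so the tests below run with native_bs=4096.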
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 94 Data Units Written: 9 Host Read Commands: 2147 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:37.108 06:32:32 -- dd/common.sh@130 -- # lbaf=04 00:07:37.108 06:32:32 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 94 
Data Units Written: 9 Host Read Commands: 2147 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:37.108 06:32:32 -- dd/common.sh@132 -- # lbaf=4096 00:07:37.108 06:32:32 -- dd/common.sh@134 -- # echo 4096 00:07:37.108 06:32:32 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:37.108 06:32:32 -- dd/basic_rw.sh@96 -- # : 00:07:37.108 06:32:32 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.108 06:32:32 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:37.108 06:32:32 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.108 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:37.108 06:32:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:37.109 06:32:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.109 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:37.109 ************************************ 00:07:37.109 START TEST dd_bs_lt_native_bs 00:07:37.109 ************************************ 00:07:37.109 06:32:32 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.109 06:32:32 -- common/autotest_common.sh@650 -- # local es=0 00:07:37.109 06:32:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.109 06:32:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.109 06:32:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.109 06:32:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.109 06:32:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.109 06:32:32 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.109 06:32:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.109 06:32:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.109 06:32:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.109 06:32:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.109 { 00:07:37.109 "subsystems": [ 00:07:37.109 { 00:07:37.109 "subsystem": "bdev", 00:07:37.109 "config": [ 00:07:37.109 { 00:07:37.109 "params": { 00:07:37.109 "trtype": "pcie", 00:07:37.109 "traddr": "0000:00:06.0", 00:07:37.109 "name": "Nvme0" 00:07:37.109 }, 00:07:37.109 "method": "bdev_nvme_attach_controller" 00:07:37.109 }, 00:07:37.109 { 00:07:37.109 "method": "bdev_wait_for_examine" 00:07:37.109 } 00:07:37.109 ] 00:07:37.109 } 00:07:37.109 ] 00:07:37.109 } 00:07:37.109 [2024-12-05 06:32:32.395585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.109 [2024-12-05 06:32:32.395689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69422 ] 00:07:37.109 [2024-12-05 06:32:32.535064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.368 [2024-12-05 06:32:32.578393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.368 [2024-12-05 06:32:32.695674] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:37.368 [2024-12-05 06:32:32.695742] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.368 [2024-12-05 06:32:32.761177] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:37.368 06:32:32 -- common/autotest_common.sh@653 -- # es=234 00:07:37.368 06:32:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.368 06:32:32 -- common/autotest_common.sh@662 -- # es=106 00:07:37.368 ************************************ 00:07:37.368 END TEST dd_bs_lt_native_bs 00:07:37.368 ************************************ 00:07:37.368 06:32:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:37.368 06:32:32 -- common/autotest_common.sh@670 -- # es=1 00:07:37.368 06:32:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.368 00:07:37.368 real 0m0.476s 00:07:37.368 user 0m0.316s 00:07:37.368 sys 0m0.114s 00:07:37.368 06:32:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.368 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:37.628 06:32:32 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:37.628 06:32:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:37.628 06:32:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.628 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:37.628 ************************************ 00:07:37.628 START TEST dd_rw 00:07:37.628 ************************************ 00:07:37.628 06:32:32 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:37.628 06:32:32 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:37.628 06:32:32 -- dd/basic_rw.sh@12 -- # local count size 00:07:37.628 06:32:32 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:37.628 06:32:32 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
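basic_rw now builds its test matrix from that native block size. (The dd_bs_lt_native_bs test that just finished covered the negative case first: spdk_dd must refuse --bs=2048 when the namespace's native block size is 4096, and the NOT wrapper normalizes the expected failure status, 234 -> 106 -> 1, into a pass.) Queue depths come from the qds=(1 64) just set; block sizes come from left-shifting native_bs, so the loop traced next is equivalent to:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))    # yields 4096, 8192, 16384
    done

Each (bs, qd) pair then gets a write/read-back/verify pass; at bs=4096 the suite copies count=15 blocks, i.e. 15 * 4096 = 61440 bytes, matching the size=61440 seen below.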
00:07:37.628 06:32:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.628 06:32:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.628 06:32:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.628 06:32:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.628 06:32:32 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.628 06:32:32 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.628 06:32:32 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:37.628 06:32:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.628 06:32:32 -- dd/basic_rw.sh@23 -- # count=15 00:07:37.628 06:32:32 -- dd/basic_rw.sh@24 -- # count=15 00:07:37.628 06:32:32 -- dd/basic_rw.sh@25 -- # size=61440 00:07:37.628 06:32:32 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:37.628 06:32:32 -- dd/common.sh@98 -- # xtrace_disable 00:07:37.628 06:32:32 -- common/autotest_common.sh@10 -- # set +x 00:07:38.196 06:32:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:38.196 06:32:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.196 06:32:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.196 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.196 [2024-12-05 06:32:33.516913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.196 [2024-12-05 06:32:33.517192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69454 ] 00:07:38.196 { 00:07:38.196 "subsystems": [ 00:07:38.196 { 00:07:38.196 "subsystem": "bdev", 00:07:38.196 "config": [ 00:07:38.196 { 00:07:38.196 "params": { 00:07:38.196 "trtype": "pcie", 00:07:38.196 "traddr": "0000:00:06.0", 00:07:38.196 "name": "Nvme0" 00:07:38.196 }, 00:07:38.196 "method": "bdev_nvme_attach_controller" 00:07:38.196 }, 00:07:38.196 { 00:07:38.196 "method": "bdev_wait_for_examine" 00:07:38.196 } 00:07:38.196 ] 00:07:38.196 } 00:07:38.196 ] 00:07:38.196 } 00:07:38.196 [2024-12-05 06:32:33.655030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.455 [2024-12-05 06:32:33.688606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.455  [2024-12-05T06:32:34.181Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:38.715 00:07:38.715 06:32:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:38.715 06:32:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.715 06:32:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.715 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.715 [2024-12-05 06:32:33.992741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
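Every spdk_dd invocation in this test reads its bdev configuration as JSON from an inherited descriptor (--json /dev/fd/62); the pretty-printed fragments interleaved through the log are gen_conf echoing that config. Reassembled, it is just:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

i.e. attach the PCIe controller at 0000:00:06.0 as Nvme0 (exposing bdev Nvme0n1) and wait for examine to complete before any I/O is issued.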
00:07:38.715 [2024-12-05 06:32:33.993021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69467 ] 00:07:38.715 { 00:07:38.715 "subsystems": [ 00:07:38.715 { 00:07:38.715 "subsystem": "bdev", 00:07:38.715 "config": [ 00:07:38.715 { 00:07:38.715 "params": { 00:07:38.715 "trtype": "pcie", 00:07:38.715 "traddr": "0000:00:06.0", 00:07:38.715 "name": "Nvme0" 00:07:38.715 }, 00:07:38.715 "method": "bdev_nvme_attach_controller" 00:07:38.715 }, 00:07:38.715 { 00:07:38.715 "method": "bdev_wait_for_examine" 00:07:38.715 } 00:07:38.715 ] 00:07:38.715 } 00:07:38.715 ] 00:07:38.715 } 00:07:38.715 [2024-12-05 06:32:34.125608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.715 [2024-12-05 06:32:34.155362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.975  [2024-12-05T06:32:34.441Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:38.975 00:07:38.975 06:32:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.975 06:32:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:38.975 06:32:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.975 06:32:34 -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.975 06:32:34 -- dd/common.sh@12 -- # local size=61440 00:07:38.975 06:32:34 -- dd/common.sh@14 -- # local bs=1048576 00:07:38.975 06:32:34 -- dd/common.sh@15 -- # local count=1 00:07:38.975 06:32:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.975 06:32:34 -- dd/common.sh@18 -- # gen_conf 00:07:38.975 06:32:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.975 06:32:34 -- common/autotest_common.sh@10 -- # set +x 00:07:39.235 [2024-12-05 06:32:34.467796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
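The diff -q above and the zero-fill now starting close out the first full cycle, and this is the shape every (bs, qd) combination repeats: write the generated file to the bdev, read it back into a second file, compare, wipe. Stripped of the JSON plumbing, the bs=4096, qd=1 pass just traced was (flags exactly as logged; paths shortened):

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1              # write the 61440-byte payload
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15   # read the 15 blocks back
    diff -q dd.dump0 dd.dump1                                        # byte-for-byte verification
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1       # clear_nvme: overwrite with one 1 MiB zero block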
00:07:39.235 [2024-12-05 06:32:34.468063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69475 ] 00:07:39.235 { 00:07:39.235 "subsystems": [ 00:07:39.235 { 00:07:39.235 "subsystem": "bdev", 00:07:39.235 "config": [ 00:07:39.235 { 00:07:39.235 "params": { 00:07:39.235 "trtype": "pcie", 00:07:39.235 "traddr": "0000:00:06.0", 00:07:39.235 "name": "Nvme0" 00:07:39.235 }, 00:07:39.235 "method": "bdev_nvme_attach_controller" 00:07:39.235 }, 00:07:39.235 { 00:07:39.235 "method": "bdev_wait_for_examine" 00:07:39.235 } 00:07:39.235 ] 00:07:39.235 } 00:07:39.235 ] 00:07:39.235 } 00:07:39.235 [2024-12-05 06:32:34.603716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.235 [2024-12-05 06:32:34.633742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.494  [2024-12-05T06:32:34.960Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:39.494 00:07:39.494 06:32:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.494 06:32:34 -- dd/basic_rw.sh@23 -- # count=15 00:07:39.494 06:32:34 -- dd/basic_rw.sh@24 -- # count=15 00:07:39.494 06:32:34 -- dd/basic_rw.sh@25 -- # size=61440 00:07:39.494 06:32:34 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:39.494 06:32:34 -- dd/common.sh@98 -- # xtrace_disable 00:07:39.494 06:32:34 -- common/autotest_common.sh@10 -- # set +x 00:07:40.060 06:32:35 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:40.060 06:32:35 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.060 06:32:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.060 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:40.060 [2024-12-05 06:32:35.492680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
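The pass now starting re-runs the same copy with queue depth as the only change, i.e.:

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64    # 64 requests in flight instead of 1

qd is the number of I/O requests spdk_dd keeps outstanding against the bdev at once, and the effect shows up directly in the progress records: the qd=1 copies above averaged 19 MBps, while the qd=64 records that follow report 58 MBps for the same 60 kB transfer.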
00:07:40.060 [2024-12-05 06:32:35.492989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69493 ] 00:07:40.060 { 00:07:40.060 "subsystems": [ 00:07:40.060 { 00:07:40.060 "subsystem": "bdev", 00:07:40.060 "config": [ 00:07:40.060 { 00:07:40.060 "params": { 00:07:40.060 "trtype": "pcie", 00:07:40.060 "traddr": "0000:00:06.0", 00:07:40.060 "name": "Nvme0" 00:07:40.060 }, 00:07:40.060 "method": "bdev_nvme_attach_controller" 00:07:40.060 }, 00:07:40.060 { 00:07:40.060 "method": "bdev_wait_for_examine" 00:07:40.060 } 00:07:40.060 ] 00:07:40.060 } 00:07:40.060 ] 00:07:40.060 } 00:07:40.318 [2024-12-05 06:32:35.629369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.318 [2024-12-05 06:32:35.659127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.318  [2024-12-05T06:32:36.042Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:40.576 00:07:40.576 06:32:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:40.576 06:32:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.576 06:32:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:40.576 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:07:40.576 [2024-12-05 06:32:35.959696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.576 [2024-12-05 06:32:35.959793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69511 ] 00:07:40.576 { 00:07:40.576 "subsystems": [ 00:07:40.576 { 00:07:40.576 "subsystem": "bdev", 00:07:40.576 "config": [ 00:07:40.576 { 00:07:40.576 "params": { 00:07:40.576 "trtype": "pcie", 00:07:40.576 "traddr": "0000:00:06.0", 00:07:40.576 "name": "Nvme0" 00:07:40.576 }, 00:07:40.576 "method": "bdev_nvme_attach_controller" 00:07:40.576 }, 00:07:40.576 { 00:07:40.576 "method": "bdev_wait_for_examine" 00:07:40.576 } 00:07:40.576 ] 00:07:40.576 } 00:07:40.576 ] 00:07:40.576 } 00:07:40.837 [2024-12-05 06:32:36.094908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.837 [2024-12-05 06:32:36.124116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.837  [2024-12-05T06:32:36.560Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:41.094 00:07:41.094 06:32:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.094 06:32:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:41.094 06:32:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.094 06:32:36 -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.094 06:32:36 -- dd/common.sh@12 -- # local size=61440 00:07:41.094 06:32:36 -- dd/common.sh@14 -- # local bs=1048576 00:07:41.094 06:32:36 -- dd/common.sh@15 -- # local count=1 00:07:41.094 06:32:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.095 06:32:36 -- dd/common.sh@18 -- # gen_conf 00:07:41.095 06:32:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:41.095 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:41.095 [2024-12-05 
06:32:36.425545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:41.095 [2024-12-05 06:32:36.425635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69519 ] 00:07:41.095 { 00:07:41.095 "subsystems": [ 00:07:41.095 { 00:07:41.095 "subsystem": "bdev", 00:07:41.095 "config": [ 00:07:41.095 { 00:07:41.095 "params": { 00:07:41.095 "trtype": "pcie", 00:07:41.095 "traddr": "0000:00:06.0", 00:07:41.095 "name": "Nvme0" 00:07:41.095 }, 00:07:41.095 "method": "bdev_nvme_attach_controller" 00:07:41.095 }, 00:07:41.095 { 00:07:41.095 "method": "bdev_wait_for_examine" 00:07:41.095 } 00:07:41.095 ] 00:07:41.095 } 00:07:41.095 ] 00:07:41.095 } 00:07:41.095 [2024-12-05 06:32:36.559280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.352 [2024-12-05 06:32:36.591310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.352  [2024-12-05T06:32:37.075Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:41.609 00:07:41.609 06:32:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:41.609 06:32:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:41.609 06:32:36 -- dd/basic_rw.sh@23 -- # count=7 00:07:41.609 06:32:36 -- dd/basic_rw.sh@24 -- # count=7 00:07:41.609 06:32:36 -- dd/basic_rw.sh@25 -- # size=57344 00:07:41.609 06:32:36 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:41.609 06:32:36 -- dd/common.sh@98 -- # xtrace_disable 00:07:41.609 06:32:36 -- common/autotest_common.sh@10 -- # set +x 00:07:42.175 06:32:37 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:42.175 06:32:37 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.175 06:32:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.175 06:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.175 [2024-12-05 06:32:37.393811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
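From here the matrix moves on to the doubled block size: the per-pass block count drops to 7, so the payload stays comparable in size, and the gen_bytes 57344 call traced just before the write produces the new test input:

    echo $((7 * 8192))     # 57344 — the size traced for the bs=8192 passes
    echo $((15 * 4096))    # 61440 — the size traced for the bs=4096 passes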
00:07:42.175 [2024-12-05 06:32:37.393927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:07:42.175 { 00:07:42.175 "subsystems": [ 00:07:42.175 { 00:07:42.175 "subsystem": "bdev", 00:07:42.175 "config": [ 00:07:42.175 { 00:07:42.175 "params": { 00:07:42.175 "trtype": "pcie", 00:07:42.175 "traddr": "0000:00:06.0", 00:07:42.175 "name": "Nvme0" 00:07:42.175 }, 00:07:42.175 "method": "bdev_nvme_attach_controller" 00:07:42.175 }, 00:07:42.175 { 00:07:42.175 "method": "bdev_wait_for_examine" 00:07:42.175 } 00:07:42.175 ] 00:07:42.175 } 00:07:42.175 ] 00:07:42.175 } 00:07:42.175 [2024-12-05 06:32:37.531163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.175 [2024-12-05 06:32:37.560510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.434  [2024-12-05T06:32:37.900Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:42.434 00:07:42.434 06:32:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:42.434 06:32:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:42.434 06:32:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.434 06:32:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.434 [2024-12-05 06:32:37.845783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.434 [2024-12-05 06:32:37.845886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69555 ] 00:07:42.434 { 00:07:42.434 "subsystems": [ 00:07:42.434 { 00:07:42.434 "subsystem": "bdev", 00:07:42.434 "config": [ 00:07:42.434 { 00:07:42.434 "params": { 00:07:42.434 "trtype": "pcie", 00:07:42.434 "traddr": "0000:00:06.0", 00:07:42.434 "name": "Nvme0" 00:07:42.434 }, 00:07:42.434 "method": "bdev_nvme_attach_controller" 00:07:42.434 }, 00:07:42.434 { 00:07:42.434 "method": "bdev_wait_for_examine" 00:07:42.434 } 00:07:42.434 ] 00:07:42.434 } 00:07:42.434 ] 00:07:42.434 } 00:07:42.693 [2024-12-05 06:32:37.970023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.693 [2024-12-05 06:32:37.999610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.693  [2024-12-05T06:32:38.417Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:42.951 00:07:42.951 06:32:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.951 06:32:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:42.951 06:32:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.951 06:32:38 -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.951 06:32:38 -- dd/common.sh@12 -- # local size=57344 00:07:42.951 06:32:38 -- dd/common.sh@14 -- # local bs=1048576 00:07:42.951 06:32:38 -- dd/common.sh@15 -- # local count=1 00:07:42.951 06:32:38 -- dd/common.sh@18 -- # gen_conf 00:07:42.951 06:32:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.951 06:32:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.951 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:42.951 [2024-12-05 
06:32:38.308028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.951 [2024-12-05 06:32:38.308133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69563 ] 00:07:42.951 { 00:07:42.951 "subsystems": [ 00:07:42.951 { 00:07:42.951 "subsystem": "bdev", 00:07:42.951 "config": [ 00:07:42.951 { 00:07:42.951 "params": { 00:07:42.951 "trtype": "pcie", 00:07:42.951 "traddr": "0000:00:06.0", 00:07:42.951 "name": "Nvme0" 00:07:42.951 }, 00:07:42.951 "method": "bdev_nvme_attach_controller" 00:07:42.951 }, 00:07:42.951 { 00:07:42.951 "method": "bdev_wait_for_examine" 00:07:42.951 } 00:07:42.951 ] 00:07:42.951 } 00:07:42.951 ] 00:07:42.951 } 00:07:43.209 [2024-12-05 06:32:38.442239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.209 [2024-12-05 06:32:38.471493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.209  [2024-12-05T06:32:38.934Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:43.468 00:07:43.468 06:32:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:43.468 06:32:38 -- dd/basic_rw.sh@23 -- # count=7 00:07:43.468 06:32:38 -- dd/basic_rw.sh@24 -- # count=7 00:07:43.468 06:32:38 -- dd/basic_rw.sh@25 -- # size=57344 00:07:43.468 06:32:38 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:43.468 06:32:38 -- dd/common.sh@98 -- # xtrace_disable 00:07:43.468 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.035 06:32:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:44.035 06:32:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:44.035 06:32:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.035 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:07:44.035 [2024-12-05 06:32:39.292128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
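The sizes in these passes cross-check against the transfer lines: with bs=8192 the harness sets count=7, and 7 blocks of 8192 bytes is exactly the 56 kB reported by each "Copying: 56/56 [kB]" line. In shell terms:

# Sanity check on the sizes chosen for the bs=8192 passes.
echo $((7 * 8192))      # 57344, the size= value set above
echo $((57344 / 1024))  # 56, matching "Copying: 56/56 [kB]"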
00:07:44.035 [2024-12-05 06:32:39.292238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69581 ] 00:07:44.035 { 00:07:44.035 "subsystems": [ 00:07:44.035 { 00:07:44.035 "subsystem": "bdev", 00:07:44.035 "config": [ 00:07:44.035 { 00:07:44.035 "params": { 00:07:44.035 "trtype": "pcie", 00:07:44.035 "traddr": "0000:00:06.0", 00:07:44.035 "name": "Nvme0" 00:07:44.035 }, 00:07:44.035 "method": "bdev_nvme_attach_controller" 00:07:44.035 }, 00:07:44.035 { 00:07:44.035 "method": "bdev_wait_for_examine" 00:07:44.035 } 00:07:44.035 ] 00:07:44.035 } 00:07:44.035 ] 00:07:44.035 } 00:07:44.035 [2024-12-05 06:32:39.428146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.035 [2024-12-05 06:32:39.457875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.293  [2024-12-05T06:32:39.759Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:44.293 00:07:44.293 06:32:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:44.293 06:32:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:44.293 06:32:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.293 06:32:39 -- common/autotest_common.sh@10 -- # set +x 00:07:44.293 [2024-12-05 06:32:39.743702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.293 [2024-12-05 06:32:39.743804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69590 ] 00:07:44.293 { 00:07:44.293 "subsystems": [ 00:07:44.293 { 00:07:44.293 "subsystem": "bdev", 00:07:44.293 "config": [ 00:07:44.293 { 00:07:44.293 "params": { 00:07:44.293 "trtype": "pcie", 00:07:44.293 "traddr": "0000:00:06.0", 00:07:44.293 "name": "Nvme0" 00:07:44.293 }, 00:07:44.293 "method": "bdev_nvme_attach_controller" 00:07:44.293 }, 00:07:44.293 { 00:07:44.293 "method": "bdev_wait_for_examine" 00:07:44.293 } 00:07:44.293 ] 00:07:44.293 } 00:07:44.293 ] 00:07:44.293 } 00:07:44.551 [2024-12-05 06:32:39.879031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.551 [2024-12-05 06:32:39.908767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.810  [2024-12-05T06:32:40.276Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:44.810 00:07:44.810 06:32:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.810 06:32:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:44.810 06:32:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:44.810 06:32:40 -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.810 06:32:40 -- dd/common.sh@12 -- # local size=57344 00:07:44.810 06:32:40 -- dd/common.sh@14 -- # local bs=1048576 00:07:44.810 06:32:40 -- dd/common.sh@15 -- # local count=1 00:07:44.810 06:32:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:44.810 06:32:40 -- dd/common.sh@18 -- # gen_conf 00:07:44.810 06:32:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.810 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:07:44.810 [2024-12-05 
06:32:40.219086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.810 [2024-12-05 06:32:40.219197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69607 ] 00:07:44.810 { 00:07:44.810 "subsystems": [ 00:07:44.810 { 00:07:44.810 "subsystem": "bdev", 00:07:44.810 "config": [ 00:07:44.810 { 00:07:44.810 "params": { 00:07:44.810 "trtype": "pcie", 00:07:44.810 "traddr": "0000:00:06.0", 00:07:44.810 "name": "Nvme0" 00:07:44.810 }, 00:07:44.810 "method": "bdev_nvme_attach_controller" 00:07:44.810 }, 00:07:44.810 { 00:07:44.810 "method": "bdev_wait_for_examine" 00:07:44.810 } 00:07:44.810 ] 00:07:44.810 } 00:07:44.810 ] 00:07:44.810 } 00:07:45.070 [2024-12-05 06:32:40.354471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.070 [2024-12-05 06:32:40.384620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.070  [2024-12-05T06:32:40.795Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:45.329 00:07:45.329 06:32:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:45.329 06:32:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:45.329 06:32:40 -- dd/basic_rw.sh@23 -- # count=3 00:07:45.329 06:32:40 -- dd/basic_rw.sh@24 -- # count=3 00:07:45.329 06:32:40 -- dd/basic_rw.sh@25 -- # size=49152 00:07:45.329 06:32:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:45.329 06:32:40 -- dd/common.sh@98 -- # xtrace_disable 00:07:45.329 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:07:45.897 06:32:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:45.897 06:32:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.897 06:32:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.897 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:07:45.897 [2024-12-05 06:32:41.147992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
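Between passes the harness's clear_nvme helper blanks the start of the bdev so stale data from the previous pass cannot satisfy the next diff. A sketch of the equivalent call, where NVME_JSON is a hypothetical file holding the same bdev config shown in this log:

# clear_nvme in essence: overwrite the first 1 MiB of the bdev with zeros.
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$NVME_JSON"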
00:07:45.897 [2024-12-05 06:32:41.148096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69625 ] 00:07:45.897 { 00:07:45.897 "subsystems": [ 00:07:45.897 { 00:07:45.897 "subsystem": "bdev", 00:07:45.897 "config": [ 00:07:45.897 { 00:07:45.897 "params": { 00:07:45.897 "trtype": "pcie", 00:07:45.897 "traddr": "0000:00:06.0", 00:07:45.897 "name": "Nvme0" 00:07:45.897 }, 00:07:45.897 "method": "bdev_nvme_attach_controller" 00:07:45.897 }, 00:07:45.897 { 00:07:45.897 "method": "bdev_wait_for_examine" 00:07:45.897 } 00:07:45.897 ] 00:07:45.897 } 00:07:45.897 ] 00:07:45.897 } 00:07:45.897 [2024-12-05 06:32:41.280363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.897 [2024-12-05 06:32:41.309877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.156  [2024-12-05T06:32:41.622Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.156 00:07:46.156 06:32:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:46.156 06:32:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.156 06:32:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.156 06:32:41 -- common/autotest_common.sh@10 -- # set +x 00:07:46.156 [2024-12-05 06:32:41.600862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.156 [2024-12-05 06:32:41.600951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69632 ] 00:07:46.156 { 00:07:46.156 "subsystems": [ 00:07:46.156 { 00:07:46.156 "subsystem": "bdev", 00:07:46.156 "config": [ 00:07:46.156 { 00:07:46.156 "params": { 00:07:46.156 "trtype": "pcie", 00:07:46.156 "traddr": "0000:00:06.0", 00:07:46.156 "name": "Nvme0" 00:07:46.156 }, 00:07:46.156 "method": "bdev_nvme_attach_controller" 00:07:46.156 }, 00:07:46.156 { 00:07:46.156 "method": "bdev_wait_for_examine" 00:07:46.156 } 00:07:46.156 ] 00:07:46.156 } 00:07:46.156 ] 00:07:46.156 } 00:07:46.416 [2024-12-05 06:32:41.733450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.416 [2024-12-05 06:32:41.763586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.416  [2024-12-05T06:32:42.141Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.675 00:07:46.675 06:32:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.675 06:32:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:46.675 06:32:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.675 06:32:42 -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.675 06:32:42 -- dd/common.sh@12 -- # local size=49152 00:07:46.675 06:32:42 -- dd/common.sh@14 -- # local bs=1048576 00:07:46.675 06:32:42 -- dd/common.sh@15 -- # local count=1 00:07:46.675 06:32:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.675 06:32:42 -- dd/common.sh@18 -- # gen_conf 00:07:46.675 06:32:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.675 06:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:46.675 [2024-12-05 
06:32:42.064869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.675 [2024-12-05 06:32:42.064955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69651 ] 00:07:46.675 { 00:07:46.675 "subsystems": [ 00:07:46.675 { 00:07:46.675 "subsystem": "bdev", 00:07:46.675 "config": [ 00:07:46.675 { 00:07:46.675 "params": { 00:07:46.675 "trtype": "pcie", 00:07:46.675 "traddr": "0000:00:06.0", 00:07:46.675 "name": "Nvme0" 00:07:46.675 }, 00:07:46.675 "method": "bdev_nvme_attach_controller" 00:07:46.675 }, 00:07:46.675 { 00:07:46.675 "method": "bdev_wait_for_examine" 00:07:46.675 } 00:07:46.675 ] 00:07:46.675 } 00:07:46.675 ] 00:07:46.675 } 00:07:46.934 [2024-12-05 06:32:42.202781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.934 [2024-12-05 06:32:42.232189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.934  [2024-12-05T06:32:42.659Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.193 00:07:47.193 06:32:42 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.193 06:32:42 -- dd/basic_rw.sh@23 -- # count=3 00:07:47.193 06:32:42 -- dd/basic_rw.sh@24 -- # count=3 00:07:47.193 06:32:42 -- dd/basic_rw.sh@25 -- # size=49152 00:07:47.193 06:32:42 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:47.193 06:32:42 -- dd/common.sh@98 -- # xtrace_disable 00:07:47.193 06:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 06:32:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:47.760 06:32:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.760 06:32:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:47.760 06:32:42 -- common/autotest_common.sh@10 -- # set +x 00:07:47.760 [2024-12-05 06:32:42.984834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
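Each write pass is verified the same way: the region is read back from the bdev into a second dump file and byte-compared against the source. A sketch of the read-back and check for the bs=16384, qd=64 pass that follows, using the same hypothetical NVME_JSON as above:

# Read the 3 x 16 KiB region back and confirm it matches what was written.
spdk_dd --ib=Nvme0n1 --of=./dd.dump1 --bs=16384 --qd=64 --count=3 --json "$NVME_JSON"
diff -q ./dd.dump0 ./dd.dump1 && echo "read-back matches"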
00:07:47.760 [2024-12-05 06:32:42.985107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69669 ] 00:07:47.760 { 00:07:47.760 "subsystems": [ 00:07:47.760 { 00:07:47.760 "subsystem": "bdev", 00:07:47.760 "config": [ 00:07:47.760 { 00:07:47.760 "params": { 00:07:47.760 "trtype": "pcie", 00:07:47.760 "traddr": "0000:00:06.0", 00:07:47.760 "name": "Nvme0" 00:07:47.760 }, 00:07:47.760 "method": "bdev_nvme_attach_controller" 00:07:47.760 }, 00:07:47.760 { 00:07:47.760 "method": "bdev_wait_for_examine" 00:07:47.760 } 00:07:47.760 ] 00:07:47.760 } 00:07:47.760 ] 00:07:47.760 } 00:07:47.760 [2024-12-05 06:32:43.123123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.760 [2024-12-05 06:32:43.153264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.018  [2024-12-05T06:32:43.484Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.018 00:07:48.018 06:32:43 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:48.018 06:32:43 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:48.018 06:32:43 -- dd/common.sh@31 -- # xtrace_disable 00:07:48.018 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:07:48.018 { 00:07:48.018 "subsystems": [ 00:07:48.018 { 00:07:48.018 "subsystem": "bdev", 00:07:48.018 "config": [ 00:07:48.018 { 00:07:48.018 "params": { 00:07:48.018 "trtype": "pcie", 00:07:48.018 "traddr": "0000:00:06.0", 00:07:48.018 "name": "Nvme0" 00:07:48.018 }, 00:07:48.018 "method": "bdev_nvme_attach_controller" 00:07:48.018 }, 00:07:48.018 { 00:07:48.018 "method": "bdev_wait_for_examine" 00:07:48.018 } 00:07:48.018 ] 00:07:48.018 } 00:07:48.018 ] 00:07:48.018 } 00:07:48.018 [2024-12-05 06:32:43.449443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.018 [2024-12-05 06:32:43.449721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69676 ] 00:07:48.277 [2024-12-05 06:32:43.577466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.277 [2024-12-05 06:32:43.606974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.277  [2024-12-05T06:32:44.002Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.536 00:07:48.536 06:32:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.536 06:32:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:48.536 06:32:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.536 06:32:43 -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.536 06:32:43 -- dd/common.sh@12 -- # local size=49152 00:07:48.536 06:32:43 -- dd/common.sh@14 -- # local bs=1048576 00:07:48.536 06:32:43 -- dd/common.sh@15 -- # local count=1 00:07:48.536 06:32:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.536 06:32:43 -- dd/common.sh@18 -- # gen_conf 00:07:48.536 06:32:43 -- dd/common.sh@31 -- # xtrace_disable 00:07:48.536 06:32:43 -- common/autotest_common.sh@10 -- # set +x 00:07:48.536 [2024-12-05 06:32:43.903870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:48.536 [2024-12-05 06:32:43.903956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69684 ] 00:07:48.536 { 00:07:48.536 "subsystems": [ 00:07:48.536 { 00:07:48.536 "subsystem": "bdev", 00:07:48.536 "config": [ 00:07:48.536 { 00:07:48.536 "params": { 00:07:48.536 "trtype": "pcie", 00:07:48.536 "traddr": "0000:00:06.0", 00:07:48.536 "name": "Nvme0" 00:07:48.536 }, 00:07:48.536 "method": "bdev_nvme_attach_controller" 00:07:48.536 }, 00:07:48.536 { 00:07:48.536 "method": "bdev_wait_for_examine" 00:07:48.536 } 00:07:48.536 ] 00:07:48.536 } 00:07:48.536 ] 00:07:48.536 } 00:07:48.813 [2024-12-05 06:32:44.039779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.813 [2024-12-05 06:32:44.069129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.813  [2024-12-05T06:32:44.538Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.072 00:07:49.072 ************************************ 00:07:49.072 END TEST dd_rw 00:07:49.072 ************************************ 00:07:49.072 00:07:49.072 real 0m11.446s 00:07:49.072 user 0m8.341s 00:07:49.072 sys 0m2.001s 00:07:49.072 06:32:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.072 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:49.072 06:32:44 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:49.072 06:32:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.072 06:32:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.072 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:49.072 ************************************ 00:07:49.072 START TEST dd_rw_offset 00:07:49.072 ************************************ 00:07:49.072 06:32:44 -- common/autotest_common.sh@1114 -- # basic_offset 
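basic_offset, entered here, checks that --seek and --skip address the bdev at the correct block offset: 4096 random bytes are written one block in, read back from the same offset, and compared in-shell. A loose sketch of the steps that follow, using the harness's gen_bytes helper seen in the trace and the hypothetical NVME_JSON from earlier:

# Offset round-trip: write one 4 KiB block at seek=1, read it back at skip=1.
data=$(gen_bytes 4096)                # 4 KiB of random printable test data
printf %s "$data" > ./dd.dump0
spdk_dd --if=./dd.dump0 --ob=Nvme0n1 --seek=1 --json "$NVME_JSON"
spdk_dd --ib=Nvme0n1 --of=./dd.dump1 --skip=1 --count=1 --json "$NVME_JSON"
read -rn4096 data_check < ./dd.dump1  # the same read the harness performs below
[[ $data == "$data_check" ]] && echo "offset round-trip matches"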
00:07:49.072 06:32:44 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:49.072 06:32:44 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:49.072 06:32:44 -- dd/common.sh@98 -- # xtrace_disable 00:07:49.072 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:49.072 06:32:44 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:49.072 06:32:44 -- dd/basic_rw.sh@56 -- # data=bk9lm0720pnhhe5aeojx9l2n18eri84lzxsonby0bf1y4t2h69he1sen0bkt0jka9uze93meai9esyrmrgkl39ql0q4k4661u2u2oiwlnrhtznf3b4qxw4x98i2fodrm7rllah7lnlim51f9h5vlq9cxhd043qbt20c6jg7umw9rax7h1chlkmbe4xxko5ge8soj42809refygwmpu4for0wg26w9lyqxbk23p5snjxc7iifta14n591261gqc70e69djr9qg3s9kat7wquil0wmew104x6eq9hsstp6pmuq5n3qerbhu5hipe8iisrth2pnog9743cxzbsih1eesjamjgfvvx75b4phxxput49fo3i2plum10gapbvum6r9ulog3v6b2y6z5h2brc0k1jv7myc2bmhes3o3kqka4gd3cy3itxch3ljjczt3ekdyfu57yf536iq2s9ff3q6ohsccei9nm02gl7lq4ux0qilpuvvco27tmmex751as5m7m6eugne1z1cmuaag6ga6og6rl0heb5jlhkitrnalp4bh0ci6orzr4rapk5roi1ypsl0qi73jyld7nix55ylu1wy90fz90vuraihygqvuc11n9v3q0mtdogk123o9zczsb416uae258xu5ll94w4apc7gdg21yq9jgadvi6izperrga1n4aexj12qcmd3d95xksd4ynyghy9w1s5241celjigufrlavjh4j8b4ug4x8g1or6oyqpwsw3cu62p7uuti2gu5ier7c37hzsc0r78qs83pzzyzkmm7erzqnscy8s5tkzoq0asoycbkdhox3zlojxiqmhgv1jnconpskx57pckjfsjb7uwlc4ucywp4zinbpkei4y26dfk2hk9tmiurlspl7o0i7bpx9eetrrihgfdzse5bhjnudf973f3d1roaqzlxdn1l91xjdbwk87jo9o3i4qwnw5havi9pc7qjqok91ck1p841oabrn63ttuz0pl9o95m7nxf73dl88qlkp8adh612lw5znbh6zvd1q1r0kbztz63amza34u3qwo0qawn0xihspb7tohqba7yu43zhicx5v3vdwvvtw8m7pua393bk9csgwfrj2p7q0ozahu1bectg0tr6v3zlq3jq2cmjg76pk2wlhnmxlfbwntab80cb718hkxq80actbhigdj2zj8vcml4l7sxfb1x2r3hqjwwu3mf6fhbm34obyxl9hspn6co7n6cmps6c4p658jujqoouta8j5pouut4jrv1ubys528qfhsiy7foyq12c76gobacdyk4dqc5cpxpjpk6ykq4rast272keen33z6mzy95avxbfsaoqn6jzawaufitmbxhuplz01lp3nujxrbeg6c75nhc6bis4ua6w0ge8zb0w7out37fpkz8a22m2nsf50lqx9xzh4obuq6fqprtgioc5qzfyhhosgkppklu3ss1yk6f25l9a1mucifq2mlul7izgi4tixbr2s3bd5di5291hx1khnhdgdz2i650f56aa4fo4qow8t3iow9zq2j2bwsf1ndsmj67m3anlfi7q6nmtpxp10pxqe62j6f7zrj9kmdu57nx333qsk8t9fklk3ztu0elaaanq102qsy83rftoagdfyfqiuz1dx09826lizqfgjzodr7ajkagfop30vzpr9ewbpspjuv1nu36ymu09kraznxa5wlmzw9yoys0be6m5k6p8vh4umtlcwpw140jbo2utdq4zco2xn1txvzw2eg23akdy6nqw3w2y6sydcu19aofq68sgo36j1z0yd7muk9ii884zooo497zqa4quqn9klabrpzdznci7cyfb4m2shp3548a3kcj7l0kfijd5714zyonibufwdeon9obumd7u5xzsgj8y3o75ecaxkttrxctf9kqgipsgmxkd0p53fhxehao9kh57d5368pnsueuv9fehxsfdykuv7cv42ttx5rg7mfrwnmiyy0b0dwa39gcc5genhgn7y70hcniciyresru3rentktplfwrrkaba2sc40qb7y2cvsctkydllah6q6y0aas74og4xv9e02wsy5ln6kc0fy9uu2r8jimz4mdr9flmyphnhtf3agi7bu8ow3tw0wg6k6t1fwzwi49kjacz0tig6mmlvztb34egv1kllrhiqspebrjuuj8pfylvapuv9lg8w192mrl5tkfjebt57hb5jr86vovrcsmfwvasjzpg3ny6xe33msc4z9obwi2aqieho783wvzgbrtnqak6c7ob332fvbia4vpjp73zhrdkv9fbfdkxu32k028baarvd524a4t6v3mmfvfbam7mgpmn6l4ibax4zopp692v5wjk64a5m97hunovhbd7kovo2ps93w0svbzwf3wz4hq5gl1vj1nmrqog1ox0sbgkw7rtfmg1vsogn6x8jbyalg134md0d7o04rmyxq1mu95ndt2k2g96o9tycpr67c43rxeczcwrx8nbszitug3s9g4v76evrhccg4i23twnu0jdz4lkrxfwdaow02v093kpmbbe9j9gpdyx1h80l0yatzehfsuy7wu0b9uub7bscg4hc80w6ta8brwxtrfjp96g95n5ck6wp8smipvw5y9yj2bt220xveux7588piu9mhddw29234nwrtbmq4nn0tdo0xw26937dnb9ha69ozrtrgbg5tifo3zceqc8a3lftl8im7comjyu2k2v1xg7ym45eo0fegls099xhh534jhklo9zgrsbwc72ifmegu36iup4l4u6ok6ttne2xbdhmvgdk7uqxje5egrvdoc8oo0fn6onjihk3lrh46vztkbak84lybn73g6pnr237n7chyt00k738noxnaxnqu6ivk9j1bi4ozrt4kkj69oh7qwhr815nfkjyj3hs8waimoe1npe79eb62juo8jzzqik65nrxyne93phnp0wi2et59rg44fd2qb9mh2xzc5rsyl8od1a5bre8l15xpwrv2ilfmfrzma492dv6axvz9j3f3ri7n0fmjl2nvfgy1q2bcdehjupbl3jeml1vbb248mczdtqmwl7jxntpx3qy68xrpqn0apsraofd7fyiaxo3sbc
nbybk4r36dnhnbdbgcmxdnzvqwdnbgna1fkverhd9l6bzlpmg17mwz9cm8wv4fsvtjl3yxhiluhr0v2us2vfijazfq2tefyyy0iawt46ofx1me494u4wu4ddeuk26ewb08ym7f7nt8bacrmbthrzfkgx910s2mvmwbfx8ehg3538gsrs1sbr8yyy1kp4g6i3g1o8qvwy33n3r5kn9vs0lmswrb30tvhr2kszioz657gjrfyyschd2hlz1xa4c9sfhgqq36tapuces5rqs0h3j7tfwfm85yjhi7wkpi5jky9g07j9l3xtznrh9l5ieaku6pumfgvgfrbfiaayt58udewabyyyl6d3hh908ef96sk5zqr3p33s3r0zs0tmncstqjtyn91x9fv0syh9g1h1gx04m509exaecqajqgqpy0o13r7pcseuvbhz7egcpz8px3j4su1lljpijp5g6ubx844aer0h440x83cs641o7m5nw0cysa8vrpbmwoquxjzgm52bo31vda6o3yrh3x4hbd3rilpro2prx0sejouu49gvzchchej7g3g4w1m16wrvsbuzvxb9k7kj4s9mvm3h0x0poer8t108lgd54qvhhphye2anxwwq9q72jpbgq2wclna10omqa9c2qvi1fbobpx7mi0uf754ajtl2e9998qig691c52kk1gbk06h6o43ft8kwzuh1r70jwd33plmr850sb3is0ecnddocn1f3n4u4egb4daa49xid7ifkf7394ejc5puvw5jl4gi5nvcck8y83dbake7i9sra84yksh2a1bwbaitqf4xfwqfy2jz6byds7tcbkrr53u5jw9954pi4dqrl1wnj4ts13pi255fqh63oeal31s6d5g4uetpcqzf6c02rzrs7sh3gba2z32mqetvs2z2enotbwm443sohl0v2vtms 00:07:49.073 06:32:44 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:49.073 06:32:44 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:49.073 06:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.073 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 [2024-12-05 06:32:44.478572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:49.073 [2024-12-05 06:32:44.478837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69719 ] 00:07:49.073 { 00:07:49.073 "subsystems": [ 00:07:49.073 { 00:07:49.073 "subsystem": "bdev", 00:07:49.073 "config": [ 00:07:49.073 { 00:07:49.073 "params": { 00:07:49.073 "trtype": "pcie", 00:07:49.073 "traddr": "0000:00:06.0", 00:07:49.073 "name": "Nvme0" 00:07:49.073 }, 00:07:49.073 "method": "bdev_nvme_attach_controller" 00:07:49.073 }, 00:07:49.073 { 00:07:49.073 "method": "bdev_wait_for_examine" 00:07:49.073 } 00:07:49.073 ] 00:07:49.073 } 00:07:49.073 ] 00:07:49.073 } 00:07:49.331 [2024-12-05 06:32:44.615501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.331 [2024-12-05 06:32:44.644283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.331  [2024-12-05T06:32:45.056Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:49.590 00:07:49.590 06:32:44 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:49.590 06:32:44 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:49.590 06:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.590 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:07:49.590 { 00:07:49.590 "subsystems": [ 00:07:49.590 { 00:07:49.590 "subsystem": "bdev", 00:07:49.590 "config": [ 00:07:49.590 { 00:07:49.590 "params": { 00:07:49.590 "trtype": "pcie", 00:07:49.590 "traddr": "0000:00:06.0", 00:07:49.590 "name": "Nvme0" 00:07:49.590 }, 00:07:49.590 "method": "bdev_nvme_attach_controller" 00:07:49.590 }, 00:07:49.590 { 00:07:49.590 "method": "bdev_wait_for_examine" 00:07:49.590 } 00:07:49.590 ] 00:07:49.590 } 00:07:49.590 ] 00:07:49.590 } 00:07:49.590 [2024-12-05 06:32:44.935903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
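The wall of backslashes below is not corruption: it is bash xtrace rendering the final comparison, in which the quoted right-hand side of [[ ... == ... ]] is printed with every character escaped so it matches literally rather than as a glob pattern. In miniature:

# Why xtrace prints the payload a second time, backslash-escaped.
set -x
data='abc123'
data_check='abc123'
[[ $data == "$data_check" ]]  # traces as: [[ abc123 == \a\b\c\1\2\3 ]]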
00:07:49.590 [2024-12-05 06:32:44.935997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69728 ] 00:07:49.848 [2024-12-05 06:32:45.071399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.848 [2024-12-05 06:32:45.100605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.848  [2024-12-05T06:32:45.573Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:50.107 00:07:50.107 06:32:45 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:50.107 ************************************ 00:07:50.107 END TEST dd_rw_offset 00:07:50.107 ************************************ 00:07:50.108 06:32:45 -- dd/basic_rw.sh@72 -- # [[ bk9lm0720pnhhe5aeojx9l2n18eri84lzxsonby0bf1y4t2h69he1sen0bkt0jka9uze93meai9esyrmrgkl39ql0q4k4661u2u2oiwlnrhtznf3b4qxw4x98i2fodrm7rllah7lnlim51f9h5vlq9cxhd043qbt20c6jg7umw9rax7h1chlkmbe4xxko5ge8soj42809refygwmpu4for0wg26w9lyqxbk23p5snjxc7iifta14n591261gqc70e69djr9qg3s9kat7wquil0wmew104x6eq9hsstp6pmuq5n3qerbhu5hipe8iisrth2pnog9743cxzbsih1eesjamjgfvvx75b4phxxput49fo3i2plum10gapbvum6r9ulog3v6b2y6z5h2brc0k1jv7myc2bmhes3o3kqka4gd3cy3itxch3ljjczt3ekdyfu57yf536iq2s9ff3q6ohsccei9nm02gl7lq4ux0qilpuvvco27tmmex751as5m7m6eugne1z1cmuaag6ga6og6rl0heb5jlhkitrnalp4bh0ci6orzr4rapk5roi1ypsl0qi73jyld7nix55ylu1wy90fz90vuraihygqvuc11n9v3q0mtdogk123o9zczsb416uae258xu5ll94w4apc7gdg21yq9jgadvi6izperrga1n4aexj12qcmd3d95xksd4ynyghy9w1s5241celjigufrlavjh4j8b4ug4x8g1or6oyqpwsw3cu62p7uuti2gu5ier7c37hzsc0r78qs83pzzyzkmm7erzqnscy8s5tkzoq0asoycbkdhox3zlojxiqmhgv1jnconpskx57pckjfsjb7uwlc4ucywp4zinbpkei4y26dfk2hk9tmiurlspl7o0i7bpx9eetrrihgfdzse5bhjnudf973f3d1roaqzlxdn1l91xjdbwk87jo9o3i4qwnw5havi9pc7qjqok91ck1p841oabrn63ttuz0pl9o95m7nxf73dl88qlkp8adh612lw5znbh6zvd1q1r0kbztz63amza34u3qwo0qawn0xihspb7tohqba7yu43zhicx5v3vdwvvtw8m7pua393bk9csgwfrj2p7q0ozahu1bectg0tr6v3zlq3jq2cmjg76pk2wlhnmxlfbwntab80cb718hkxq80actbhigdj2zj8vcml4l7sxfb1x2r3hqjwwu3mf6fhbm34obyxl9hspn6co7n6cmps6c4p658jujqoouta8j5pouut4jrv1ubys528qfhsiy7foyq12c76gobacdyk4dqc5cpxpjpk6ykq4rast272keen33z6mzy95avxbfsaoqn6jzawaufitmbxhuplz01lp3nujxrbeg6c75nhc6bis4ua6w0ge8zb0w7out37fpkz8a22m2nsf50lqx9xzh4obuq6fqprtgioc5qzfyhhosgkppklu3ss1yk6f25l9a1mucifq2mlul7izgi4tixbr2s3bd5di5291hx1khnhdgdz2i650f56aa4fo4qow8t3iow9zq2j2bwsf1ndsmj67m3anlfi7q6nmtpxp10pxqe62j6f7zrj9kmdu57nx333qsk8t9fklk3ztu0elaaanq102qsy83rftoagdfyfqiuz1dx09826lizqfgjzodr7ajkagfop30vzpr9ewbpspjuv1nu36ymu09kraznxa5wlmzw9yoys0be6m5k6p8vh4umtlcwpw140jbo2utdq4zco2xn1txvzw2eg23akdy6nqw3w2y6sydcu19aofq68sgo36j1z0yd7muk9ii884zooo497zqa4quqn9klabrpzdznci7cyfb4m2shp3548a3kcj7l0kfijd5714zyonibufwdeon9obumd7u5xzsgj8y3o75ecaxkttrxctf9kqgipsgmxkd0p53fhxehao9kh57d5368pnsueuv9fehxsfdykuv7cv42ttx5rg7mfrwnmiyy0b0dwa39gcc5genhgn7y70hcniciyresru3rentktplfwrrkaba2sc40qb7y2cvsctkydllah6q6y0aas74og4xv9e02wsy5ln6kc0fy9uu2r8jimz4mdr9flmyphnhtf3agi7bu8ow3tw0wg6k6t1fwzwi49kjacz0tig6mmlvztb34egv1kllrhiqspebrjuuj8pfylvapuv9lg8w192mrl5tkfjebt57hb5jr86vovrcsmfwvasjzpg3ny6xe33msc4z9obwi2aqieho783wvzgbrtnqak6c7ob332fvbia4vpjp73zhrdkv9fbfdkxu32k028baarvd524a4t6v3mmfvfbam7mgpmn6l4ibax4zopp692v5wjk64a5m97hunovhbd7kovo2ps93w0svbzwf3wz4hq5gl1vj1nmrqog1ox0sbgkw7rtfmg1vsogn6x8jbyalg134md0d7o04rmyxq1mu95ndt2k2g96o9tycpr67c43rxeczcwrx8nbszitug3s9g4v76evrhccg4i23twnu0jdz4lkrxfwdaow02v093kpmbbe9j9gpdyx1h80l0yatzehfsuy7wu0b9uub7bscg4hc80w6ta8brwxtrfjp96g95n5ck6wp8smipvw5y9yj2bt220xveux7588piu9mhddw29234nwrtbmq4nn0td
o0xw26937dnb9ha69ozrtrgbg5tifo3zceqc8a3lftl8im7comjyu2k2v1xg7ym45eo0fegls099xhh534jhklo9zgrsbwc72ifmegu36iup4l4u6ok6ttne2xbdhmvgdk7uqxje5egrvdoc8oo0fn6onjihk3lrh46vztkbak84lybn73g6pnr237n7chyt00k738noxnaxnqu6ivk9j1bi4ozrt4kkj69oh7qwhr815nfkjyj3hs8waimoe1npe79eb62juo8jzzqik65nrxyne93phnp0wi2et59rg44fd2qb9mh2xzc5rsyl8od1a5bre8l15xpwrv2ilfmfrzma492dv6axvz9j3f3ri7n0fmjl2nvfgy1q2bcdehjupbl3jeml1vbb248mczdtqmwl7jxntpx3qy68xrpqn0apsraofd7fyiaxo3sbcnbybk4r36dnhnbdbgcmxdnzvqwdnbgna1fkverhd9l6bzlpmg17mwz9cm8wv4fsvtjl3yxhiluhr0v2us2vfijazfq2tefyyy0iawt46ofx1me494u4wu4ddeuk26ewb08ym7f7nt8bacrmbthrzfkgx910s2mvmwbfx8ehg3538gsrs1sbr8yyy1kp4g6i3g1o8qvwy33n3r5kn9vs0lmswrb30tvhr2kszioz657gjrfyyschd2hlz1xa4c9sfhgqq36tapuces5rqs0h3j7tfwfm85yjhi7wkpi5jky9g07j9l3xtznrh9l5ieaku6pumfgvgfrbfiaayt58udewabyyyl6d3hh908ef96sk5zqr3p33s3r0zs0tmncstqjtyn91x9fv0syh9g1h1gx04m509exaecqajqgqpy0o13r7pcseuvbhz7egcpz8px3j4su1lljpijp5g6ubx844aer0h440x83cs641o7m5nw0cysa8vrpbmwoquxjzgm52bo31vda6o3yrh3x4hbd3rilpro2prx0sejouu49gvzchchej7g3g4w1m16wrvsbuzvxb9k7kj4s9mvm3h0x0poer8t108lgd54qvhhphye2anxwwq9q72jpbgq2wclna10omqa9c2qvi1fbobpx7mi0uf754ajtl2e9998qig691c52kk1gbk06h6o43ft8kwzuh1r70jwd33plmr850sb3is0ecnddocn1f3n4u4egb4daa49xid7ifkf7394ejc5puvw5jl4gi5nvcck8y83dbake7i9sra84yksh2a1bwbaitqf4xfwqfy2jz6byds7tcbkrr53u5jw9954pi4dqrl1wnj4ts13pi255fqh63oeal31s6d5g4uetpcqzf6c02rzrs7sh3gba2z32mqetvs2z2enotbwm443sohl0v2vtms == \b\k\9\l\m\0\7\2\0\p\n\h\h\e\5\a\e\o\j\x\9\l\2\n\1\8\e\r\i\8\4\l\z\x\s\o\n\b\y\0\b\f\1\y\4\t\2\h\6\9\h\e\1\s\e\n\0\b\k\t\0\j\k\a\9\u\z\e\9\3\m\e\a\i\9\e\s\y\r\m\r\g\k\l\3\9\q\l\0\q\4\k\4\6\6\1\u\2\u\2\o\i\w\l\n\r\h\t\z\n\f\3\b\4\q\x\w\4\x\9\8\i\2\f\o\d\r\m\7\r\l\l\a\h\7\l\n\l\i\m\5\1\f\9\h\5\v\l\q\9\c\x\h\d\0\4\3\q\b\t\2\0\c\6\j\g\7\u\m\w\9\r\a\x\7\h\1\c\h\l\k\m\b\e\4\x\x\k\o\5\g\e\8\s\o\j\4\2\8\0\9\r\e\f\y\g\w\m\p\u\4\f\o\r\0\w\g\2\6\w\9\l\y\q\x\b\k\2\3\p\5\s\n\j\x\c\7\i\i\f\t\a\1\4\n\5\9\1\2\6\1\g\q\c\7\0\e\6\9\d\j\r\9\q\g\3\s\9\k\a\t\7\w\q\u\i\l\0\w\m\e\w\1\0\4\x\6\e\q\9\h\s\s\t\p\6\p\m\u\q\5\n\3\q\e\r\b\h\u\5\h\i\p\e\8\i\i\s\r\t\h\2\p\n\o\g\9\7\4\3\c\x\z\b\s\i\h\1\e\e\s\j\a\m\j\g\f\v\v\x\7\5\b\4\p\h\x\x\p\u\t\4\9\f\o\3\i\2\p\l\u\m\1\0\g\a\p\b\v\u\m\6\r\9\u\l\o\g\3\v\6\b\2\y\6\z\5\h\2\b\r\c\0\k\1\j\v\7\m\y\c\2\b\m\h\e\s\3\o\3\k\q\k\a\4\g\d\3\c\y\3\i\t\x\c\h\3\l\j\j\c\z\t\3\e\k\d\y\f\u\5\7\y\f\5\3\6\i\q\2\s\9\f\f\3\q\6\o\h\s\c\c\e\i\9\n\m\0\2\g\l\7\l\q\4\u\x\0\q\i\l\p\u\v\v\c\o\2\7\t\m\m\e\x\7\5\1\a\s\5\m\7\m\6\e\u\g\n\e\1\z\1\c\m\u\a\a\g\6\g\a\6\o\g\6\r\l\0\h\e\b\5\j\l\h\k\i\t\r\n\a\l\p\4\b\h\0\c\i\6\o\r\z\r\4\r\a\p\k\5\r\o\i\1\y\p\s\l\0\q\i\7\3\j\y\l\d\7\n\i\x\5\5\y\l\u\1\w\y\9\0\f\z\9\0\v\u\r\a\i\h\y\g\q\v\u\c\1\1\n\9\v\3\q\0\m\t\d\o\g\k\1\2\3\o\9\z\c\z\s\b\4\1\6\u\a\e\2\5\8\x\u\5\l\l\9\4\w\4\a\p\c\7\g\d\g\2\1\y\q\9\j\g\a\d\v\i\6\i\z\p\e\r\r\g\a\1\n\4\a\e\x\j\1\2\q\c\m\d\3\d\9\5\x\k\s\d\4\y\n\y\g\h\y\9\w\1\s\5\2\4\1\c\e\l\j\i\g\u\f\r\l\a\v\j\h\4\j\8\b\4\u\g\4\x\8\g\1\o\r\6\o\y\q\p\w\s\w\3\c\u\6\2\p\7\u\u\t\i\2\g\u\5\i\e\r\7\c\3\7\h\z\s\c\0\r\7\8\q\s\8\3\p\z\z\y\z\k\m\m\7\e\r\z\q\n\s\c\y\8\s\5\t\k\z\o\q\0\a\s\o\y\c\b\k\d\h\o\x\3\z\l\o\j\x\i\q\m\h\g\v\1\j\n\c\o\n\p\s\k\x\5\7\p\c\k\j\f\s\j\b\7\u\w\l\c\4\u\c\y\w\p\4\z\i\n\b\p\k\e\i\4\y\2\6\d\f\k\2\h\k\9\t\m\i\u\r\l\s\p\l\7\o\0\i\7\b\p\x\9\e\e\t\r\r\i\h\g\f\d\z\s\e\5\b\h\j\n\u\d\f\9\7\3\f\3\d\1\r\o\a\q\z\l\x\d\n\1\l\9\1\x\j\d\b\w\k\8\7\j\o\9\o\3\i\4\q\w\n\w\5\h\a\v\i\9\p\c\7\q\j\q\o\k\9\1\c\k\1\p\8\4\1\o\a\b\r\n\6\3\t\t\u\z\0\p\l\9\o\9\5\m\7\n\x\f\7\3\d\l\8\8\q\l\k\p\8\a\d\h\6\1\2\l\w\5\z\n\b\h\6\z\v\d\1\q\1\r\0\k\b\z\t\z\6\3\a\m\z\a\3\4\u\3\q\w\o\0\q\a\w\n\0\x\i\h\s\p\b
\7\t\o\h\q\b\a\7\y\u\4\3\z\h\i\c\x\5\v\3\v\d\w\v\v\t\w\8\m\7\p\u\a\3\9\3\b\k\9\c\s\g\w\f\r\j\2\p\7\q\0\o\z\a\h\u\1\b\e\c\t\g\0\t\r\6\v\3\z\l\q\3\j\q\2\c\m\j\g\7\6\p\k\2\w\l\h\n\m\x\l\f\b\w\n\t\a\b\8\0\c\b\7\1\8\h\k\x\q\8\0\a\c\t\b\h\i\g\d\j\2\z\j\8\v\c\m\l\4\l\7\s\x\f\b\1\x\2\r\3\h\q\j\w\w\u\3\m\f\6\f\h\b\m\3\4\o\b\y\x\l\9\h\s\p\n\6\c\o\7\n\6\c\m\p\s\6\c\4\p\6\5\8\j\u\j\q\o\o\u\t\a\8\j\5\p\o\u\u\t\4\j\r\v\1\u\b\y\s\5\2\8\q\f\h\s\i\y\7\f\o\y\q\1\2\c\7\6\g\o\b\a\c\d\y\k\4\d\q\c\5\c\p\x\p\j\p\k\6\y\k\q\4\r\a\s\t\2\7\2\k\e\e\n\3\3\z\6\m\z\y\9\5\a\v\x\b\f\s\a\o\q\n\6\j\z\a\w\a\u\f\i\t\m\b\x\h\u\p\l\z\0\1\l\p\3\n\u\j\x\r\b\e\g\6\c\7\5\n\h\c\6\b\i\s\4\u\a\6\w\0\g\e\8\z\b\0\w\7\o\u\t\3\7\f\p\k\z\8\a\2\2\m\2\n\s\f\5\0\l\q\x\9\x\z\h\4\o\b\u\q\6\f\q\p\r\t\g\i\o\c\5\q\z\f\y\h\h\o\s\g\k\p\p\k\l\u\3\s\s\1\y\k\6\f\2\5\l\9\a\1\m\u\c\i\f\q\2\m\l\u\l\7\i\z\g\i\4\t\i\x\b\r\2\s\3\b\d\5\d\i\5\2\9\1\h\x\1\k\h\n\h\d\g\d\z\2\i\6\5\0\f\5\6\a\a\4\f\o\4\q\o\w\8\t\3\i\o\w\9\z\q\2\j\2\b\w\s\f\1\n\d\s\m\j\6\7\m\3\a\n\l\f\i\7\q\6\n\m\t\p\x\p\1\0\p\x\q\e\6\2\j\6\f\7\z\r\j\9\k\m\d\u\5\7\n\x\3\3\3\q\s\k\8\t\9\f\k\l\k\3\z\t\u\0\e\l\a\a\a\n\q\1\0\2\q\s\y\8\3\r\f\t\o\a\g\d\f\y\f\q\i\u\z\1\d\x\0\9\8\2\6\l\i\z\q\f\g\j\z\o\d\r\7\a\j\k\a\g\f\o\p\3\0\v\z\p\r\9\e\w\b\p\s\p\j\u\v\1\n\u\3\6\y\m\u\0\9\k\r\a\z\n\x\a\5\w\l\m\z\w\9\y\o\y\s\0\b\e\6\m\5\k\6\p\8\v\h\4\u\m\t\l\c\w\p\w\1\4\0\j\b\o\2\u\t\d\q\4\z\c\o\2\x\n\1\t\x\v\z\w\2\e\g\2\3\a\k\d\y\6\n\q\w\3\w\2\y\6\s\y\d\c\u\1\9\a\o\f\q\6\8\s\g\o\3\6\j\1\z\0\y\d\7\m\u\k\9\i\i\8\8\4\z\o\o\o\4\9\7\z\q\a\4\q\u\q\n\9\k\l\a\b\r\p\z\d\z\n\c\i\7\c\y\f\b\4\m\2\s\h\p\3\5\4\8\a\3\k\c\j\7\l\0\k\f\i\j\d\5\7\1\4\z\y\o\n\i\b\u\f\w\d\e\o\n\9\o\b\u\m\d\7\u\5\x\z\s\g\j\8\y\3\o\7\5\e\c\a\x\k\t\t\r\x\c\t\f\9\k\q\g\i\p\s\g\m\x\k\d\0\p\5\3\f\h\x\e\h\a\o\9\k\h\5\7\d\5\3\6\8\p\n\s\u\e\u\v\9\f\e\h\x\s\f\d\y\k\u\v\7\c\v\4\2\t\t\x\5\r\g\7\m\f\r\w\n\m\i\y\y\0\b\0\d\w\a\3\9\g\c\c\5\g\e\n\h\g\n\7\y\7\0\h\c\n\i\c\i\y\r\e\s\r\u\3\r\e\n\t\k\t\p\l\f\w\r\r\k\a\b\a\2\s\c\4\0\q\b\7\y\2\c\v\s\c\t\k\y\d\l\l\a\h\6\q\6\y\0\a\a\s\7\4\o\g\4\x\v\9\e\0\2\w\s\y\5\l\n\6\k\c\0\f\y\9\u\u\2\r\8\j\i\m\z\4\m\d\r\9\f\l\m\y\p\h\n\h\t\f\3\a\g\i\7\b\u\8\o\w\3\t\w\0\w\g\6\k\6\t\1\f\w\z\w\i\4\9\k\j\a\c\z\0\t\i\g\6\m\m\l\v\z\t\b\3\4\e\g\v\1\k\l\l\r\h\i\q\s\p\e\b\r\j\u\u\j\8\p\f\y\l\v\a\p\u\v\9\l\g\8\w\1\9\2\m\r\l\5\t\k\f\j\e\b\t\5\7\h\b\5\j\r\8\6\v\o\v\r\c\s\m\f\w\v\a\s\j\z\p\g\3\n\y\6\x\e\3\3\m\s\c\4\z\9\o\b\w\i\2\a\q\i\e\h\o\7\8\3\w\v\z\g\b\r\t\n\q\a\k\6\c\7\o\b\3\3\2\f\v\b\i\a\4\v\p\j\p\7\3\z\h\r\d\k\v\9\f\b\f\d\k\x\u\3\2\k\0\2\8\b\a\a\r\v\d\5\2\4\a\4\t\6\v\3\m\m\f\v\f\b\a\m\7\m\g\p\m\n\6\l\4\i\b\a\x\4\z\o\p\p\6\9\2\v\5\w\j\k\6\4\a\5\m\9\7\h\u\n\o\v\h\b\d\7\k\o\v\o\2\p\s\9\3\w\0\s\v\b\z\w\f\3\w\z\4\h\q\5\g\l\1\v\j\1\n\m\r\q\o\g\1\o\x\0\s\b\g\k\w\7\r\t\f\m\g\1\v\s\o\g\n\6\x\8\j\b\y\a\l\g\1\3\4\m\d\0\d\7\o\0\4\r\m\y\x\q\1\m\u\9\5\n\d\t\2\k\2\g\9\6\o\9\t\y\c\p\r\6\7\c\4\3\r\x\e\c\z\c\w\r\x\8\n\b\s\z\i\t\u\g\3\s\9\g\4\v\7\6\e\v\r\h\c\c\g\4\i\2\3\t\w\n\u\0\j\d\z\4\l\k\r\x\f\w\d\a\o\w\0\2\v\0\9\3\k\p\m\b\b\e\9\j\9\g\p\d\y\x\1\h\8\0\l\0\y\a\t\z\e\h\f\s\u\y\7\w\u\0\b\9\u\u\b\7\b\s\c\g\4\h\c\8\0\w\6\t\a\8\b\r\w\x\t\r\f\j\p\9\6\g\9\5\n\5\c\k\6\w\p\8\s\m\i\p\v\w\5\y\9\y\j\2\b\t\2\2\0\x\v\e\u\x\7\5\8\8\p\i\u\9\m\h\d\d\w\2\9\2\3\4\n\w\r\t\b\m\q\4\n\n\0\t\d\o\0\x\w\2\6\9\3\7\d\n\b\9\h\a\6\9\o\z\r\t\r\g\b\g\5\t\i\f\o\3\z\c\e\q\c\8\a\3\l\f\t\l\8\i\m\7\c\o\m\j\y\u\2\k\2\v\1\x\g\7\y\m\4\5\e\o\0\f\e\g\l\s\0\9\9\x\h\h\5\3\4\j\h\k\l\o\9\z\g\r\s\b\w\c\7\2\i\f\m\e\g\u\3\6\i\u\p\4\l\4\u\6\o\k\6\t\t\n\e\2\x\b\d\h\m\v\g\d\k\7\u\q\x\j\e\5\e\g\r\v\d\o\c\8\o\o\0\f\n\6\o\n\
j\i\h\k\3\l\r\h\4\6\v\z\t\k\b\a\k\8\4\l\y\b\n\7\3\g\6\p\n\r\2\3\7\n\7\c\h\y\t\0\0\k\7\3\8\n\o\x\n\a\x\n\q\u\6\i\v\k\9\j\1\b\i\4\o\z\r\t\4\k\k\j\6\9\o\h\7\q\w\h\r\8\1\5\n\f\k\j\y\j\3\h\s\8\w\a\i\m\o\e\1\n\p\e\7\9\e\b\6\2\j\u\o\8\j\z\z\q\i\k\6\5\n\r\x\y\n\e\9\3\p\h\n\p\0\w\i\2\e\t\5\9\r\g\4\4\f\d\2\q\b\9\m\h\2\x\z\c\5\r\s\y\l\8\o\d\1\a\5\b\r\e\8\l\1\5\x\p\w\r\v\2\i\l\f\m\f\r\z\m\a\4\9\2\d\v\6\a\x\v\z\9\j\3\f\3\r\i\7\n\0\f\m\j\l\2\n\v\f\g\y\1\q\2\b\c\d\e\h\j\u\p\b\l\3\j\e\m\l\1\v\b\b\2\4\8\m\c\z\d\t\q\m\w\l\7\j\x\n\t\p\x\3\q\y\6\8\x\r\p\q\n\0\a\p\s\r\a\o\f\d\7\f\y\i\a\x\o\3\s\b\c\n\b\y\b\k\4\r\3\6\d\n\h\n\b\d\b\g\c\m\x\d\n\z\v\q\w\d\n\b\g\n\a\1\f\k\v\e\r\h\d\9\l\6\b\z\l\p\m\g\1\7\m\w\z\9\c\m\8\w\v\4\f\s\v\t\j\l\3\y\x\h\i\l\u\h\r\0\v\2\u\s\2\v\f\i\j\a\z\f\q\2\t\e\f\y\y\y\0\i\a\w\t\4\6\o\f\x\1\m\e\4\9\4\u\4\w\u\4\d\d\e\u\k\2\6\e\w\b\0\8\y\m\7\f\7\n\t\8\b\a\c\r\m\b\t\h\r\z\f\k\g\x\9\1\0\s\2\m\v\m\w\b\f\x\8\e\h\g\3\5\3\8\g\s\r\s\1\s\b\r\8\y\y\y\1\k\p\4\g\6\i\3\g\1\o\8\q\v\w\y\3\3\n\3\r\5\k\n\9\v\s\0\l\m\s\w\r\b\3\0\t\v\h\r\2\k\s\z\i\o\z\6\5\7\g\j\r\f\y\y\s\c\h\d\2\h\l\z\1\x\a\4\c\9\s\f\h\g\q\q\3\6\t\a\p\u\c\e\s\5\r\q\s\0\h\3\j\7\t\f\w\f\m\8\5\y\j\h\i\7\w\k\p\i\5\j\k\y\9\g\0\7\j\9\l\3\x\t\z\n\r\h\9\l\5\i\e\a\k\u\6\p\u\m\f\g\v\g\f\r\b\f\i\a\a\y\t\5\8\u\d\e\w\a\b\y\y\y\l\6\d\3\h\h\9\0\8\e\f\9\6\s\k\5\z\q\r\3\p\3\3\s\3\r\0\z\s\0\t\m\n\c\s\t\q\j\t\y\n\9\1\x\9\f\v\0\s\y\h\9\g\1\h\1\g\x\0\4\m\5\0\9\e\x\a\e\c\q\a\j\q\g\q\p\y\0\o\1\3\r\7\p\c\s\e\u\v\b\h\z\7\e\g\c\p\z\8\p\x\3\j\4\s\u\1\l\l\j\p\i\j\p\5\g\6\u\b\x\8\4\4\a\e\r\0\h\4\4\0\x\8\3\c\s\6\4\1\o\7\m\5\n\w\0\c\y\s\a\8\v\r\p\b\m\w\o\q\u\x\j\z\g\m\5\2\b\o\3\1\v\d\a\6\o\3\y\r\h\3\x\4\h\b\d\3\r\i\l\p\r\o\2\p\r\x\0\s\e\j\o\u\u\4\9\g\v\z\c\h\c\h\e\j\7\g\3\g\4\w\1\m\1\6\w\r\v\s\b\u\z\v\x\b\9\k\7\k\j\4\s\9\m\v\m\3\h\0\x\0\p\o\e\r\8\t\1\0\8\l\g\d\5\4\q\v\h\h\p\h\y\e\2\a\n\x\w\w\q\9\q\7\2\j\p\b\g\q\2\w\c\l\n\a\1\0\o\m\q\a\9\c\2\q\v\i\1\f\b\o\b\p\x\7\m\i\0\u\f\7\5\4\a\j\t\l\2\e\9\9\9\8\q\i\g\6\9\1\c\5\2\k\k\1\g\b\k\0\6\h\6\o\4\3\f\t\8\k\w\z\u\h\1\r\7\0\j\w\d\3\3\p\l\m\r\8\5\0\s\b\3\i\s\0\e\c\n\d\d\o\c\n\1\f\3\n\4\u\4\e\g\b\4\d\a\a\4\9\x\i\d\7\i\f\k\f\7\3\9\4\e\j\c\5\p\u\v\w\5\j\l\4\g\i\5\n\v\c\c\k\8\y\8\3\d\b\a\k\e\7\i\9\s\r\a\8\4\y\k\s\h\2\a\1\b\w\b\a\i\t\q\f\4\x\f\w\q\f\y\2\j\z\6\b\y\d\s\7\t\c\b\k\r\r\5\3\u\5\j\w\9\9\5\4\p\i\4\d\q\r\l\1\w\n\j\4\t\s\1\3\p\i\2\5\5\f\q\h\6\3\o\e\a\l\3\1\s\6\d\5\g\4\u\e\t\p\c\q\z\f\6\c\0\2\r\z\r\s\7\s\h\3\g\b\a\2\z\3\2\m\q\e\t\v\s\2\z\2\e\n\o\t\b\w\m\4\4\3\s\o\h\l\0\v\2\v\t\m\s ]] 00:07:50.108 00:07:50.108 real 0m0.968s 00:07:50.108 user 0m0.630s 00:07:50.108 sys 0m0.206s 00:07:50.108 06:32:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.108 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:07:50.108 06:32:45 -- dd/basic_rw.sh@1 -- # cleanup 00:07:50.108 06:32:45 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:50.108 06:32:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:50.108 06:32:45 -- dd/common.sh@11 -- # local nvme_ref= 00:07:50.108 06:32:45 -- dd/common.sh@12 -- # local size=0xffff 00:07:50.108 06:32:45 -- dd/common.sh@14 -- # local bs=1048576 00:07:50.108 06:32:45 -- dd/common.sh@15 -- # local count=1 00:07:50.108 06:32:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:50.108 06:32:45 -- dd/common.sh@18 -- # gen_conf 00:07:50.108 06:32:45 -- dd/common.sh@31 -- # xtrace_disable 00:07:50.108 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:07:50.108 [2024-12-05 06:32:45.431425] Starting SPDK 
v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:50.108 [2024-12-05 06:32:45.431510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69759 ] 00:07:50.108 { 00:07:50.108 "subsystems": [ 00:07:50.108 { 00:07:50.108 "subsystem": "bdev", 00:07:50.108 "config": [ 00:07:50.108 { 00:07:50.108 "params": { 00:07:50.108 "trtype": "pcie", 00:07:50.108 "traddr": "0000:00:06.0", 00:07:50.108 "name": "Nvme0" 00:07:50.108 }, 00:07:50.108 "method": "bdev_nvme_attach_controller" 00:07:50.108 }, 00:07:50.108 { 00:07:50.108 "method": "bdev_wait_for_examine" 00:07:50.108 } 00:07:50.108 ] 00:07:50.108 } 00:07:50.108 ] 00:07:50.108 } 00:07:50.108 [2024-12-05 06:32:45.561420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.366 [2024-12-05 06:32:45.592269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.366  [2024-12-05T06:32:46.091Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.625 00:07:50.625 06:32:45 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.625 ************************************ 00:07:50.625 END TEST spdk_dd_basic_rw 00:07:50.625 ************************************ 00:07:50.625 00:07:50.625 real 0m13.900s 00:07:50.625 user 0m9.806s 00:07:50.625 sys 0m2.643s 00:07:50.625 06:32:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.625 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:07:50.625 06:32:45 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.625 06:32:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.625 06:32:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.625 06:32:45 -- common/autotest_common.sh@10 -- # set +x 00:07:50.625 ************************************ 00:07:50.625 START TEST spdk_dd_posix 00:07:50.625 ************************************ 00:07:50.625 06:32:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.625 * Looking for test storage... 
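The block that follows is standard prologue from autotest_common.sh: it reads the installed lcov version and compares it against 2, field by field, to decide which coverage options to export. A condensed, hypothetical re-implementation of that comparison, not the harness's exact code:

# Field-wise version compare in the style of the cmp_versions helper traced below.
ver_lt() {
  local IFS=.-
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((${a[i]:-0} < ${b[i]:-0})) && return 0
    ((${a[i]:-0} > ${b[i]:-0})) && return 1
  done
  return 1  # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # mirrors the 'lt 1.15 2' call below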
00:07:50.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.625 06:32:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:50.626 06:32:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:50.626 06:32:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:50.626 06:32:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:50.626 06:32:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:50.626 06:32:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:50.626 06:32:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:50.626 06:32:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:50.626 06:32:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:50.626 06:32:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.626 06:32:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:50.626 06:32:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:50.626 06:32:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:50.626 06:32:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:50.626 06:32:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:50.626 06:32:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:50.626 06:32:46 -- scripts/common.sh@344 -- # : 1 00:07:50.626 06:32:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:50.626 06:32:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.626 06:32:46 -- scripts/common.sh@364 -- # decimal 1 00:07:50.626 06:32:46 -- scripts/common.sh@352 -- # local d=1 00:07:50.626 06:32:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.626 06:32:46 -- scripts/common.sh@354 -- # echo 1 00:07:50.626 06:32:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:50.626 06:32:46 -- scripts/common.sh@365 -- # decimal 2 00:07:50.626 06:32:46 -- scripts/common.sh@352 -- # local d=2 00:07:50.626 06:32:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.626 06:32:46 -- scripts/common.sh@354 -- # echo 2 00:07:50.626 06:32:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:50.626 06:32:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:50.626 06:32:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:50.626 06:32:46 -- scripts/common.sh@367 -- # return 0 00:07:50.626 06:32:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.626 06:32:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:50.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.626 --rc genhtml_branch_coverage=1 00:07:50.626 --rc genhtml_function_coverage=1 00:07:50.626 --rc genhtml_legend=1 00:07:50.626 --rc geninfo_all_blocks=1 00:07:50.626 --rc geninfo_unexecuted_blocks=1 00:07:50.626 00:07:50.626 ' 00:07:50.626 06:32:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:50.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.626 --rc genhtml_branch_coverage=1 00:07:50.626 --rc genhtml_function_coverage=1 00:07:50.626 --rc genhtml_legend=1 00:07:50.626 --rc geninfo_all_blocks=1 00:07:50.626 --rc geninfo_unexecuted_blocks=1 00:07:50.626 00:07:50.626 ' 00:07:50.626 06:32:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:50.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.626 --rc genhtml_branch_coverage=1 00:07:50.626 --rc genhtml_function_coverage=1 00:07:50.626 --rc genhtml_legend=1 00:07:50.626 --rc geninfo_all_blocks=1 00:07:50.626 --rc geninfo_unexecuted_blocks=1 00:07:50.626 00:07:50.626 ' 00:07:50.626 06:32:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:50.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.626 --rc genhtml_branch_coverage=1 00:07:50.626 --rc genhtml_function_coverage=1 00:07:50.626 --rc genhtml_legend=1 00:07:50.626 --rc geninfo_all_blocks=1 00:07:50.626 --rc geninfo_unexecuted_blocks=1 00:07:50.626 00:07:50.626 ' 00:07:50.626 06:32:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.626 06:32:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.626 06:32:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.626 06:32:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.626 06:32:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.626 06:32:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.626 06:32:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.626 06:32:46 -- paths/export.sh@5 -- # export PATH 00:07:50.626 06:32:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.626 06:32:46 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:50.885 06:32:46 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:50.885 06:32:46 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:50.885 06:32:46 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:50.885 06:32:46 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.885 06:32:46 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.885 06:32:46 -- dd/posix.sh@130 -- # tests 00:07:50.885 06:32:46 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:50.885 * First test run, liburing in use 00:07:50.885 06:32:46 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:50.885 06:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.885 06:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.885 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:50.885 ************************************ 00:07:50.885 START TEST dd_flag_append 00:07:50.885 ************************************ 00:07:50.885 06:32:46 -- common/autotest_common.sh@1114 -- # append 00:07:50.885 06:32:46 -- dd/posix.sh@16 -- # local dump0 00:07:50.885 06:32:46 -- dd/posix.sh@17 -- # local dump1 00:07:50.885 06:32:46 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:50.885 06:32:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:50.885 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:50.885 06:32:46 -- dd/posix.sh@19 -- # dump0=jrctplnidmygequaiht7pqe2n6hrm1kd 00:07:50.885 06:32:46 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:50.885 06:32:46 -- dd/common.sh@98 -- # xtrace_disable 00:07:50.885 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:50.885 06:32:46 -- dd/posix.sh@20 -- # dump1=k9gm2qsvr0nvc4d5wssuvceyvmhodv2q 00:07:50.885 06:32:46 -- dd/posix.sh@22 -- # printf %s jrctplnidmygequaiht7pqe2n6hrm1kd 00:07:50.885 06:32:46 -- dd/posix.sh@23 -- # printf %s k9gm2qsvr0nvc4d5wssuvceyvmhodv2q 00:07:50.885 06:32:46 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:50.885 [2024-12-05 06:32:46.158908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
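dd_flag_append, starting here, asserts that writing with --oflag=append leaves the destination's existing bytes in place and adds the source after them; the two 32-character random strings above are the test payloads. A sketch with hypothetical short payloads in their place:

# Append semantics under test: dump1 must end up as <old dump1><dump0>.
printf %s 'AAAA' > dump0
printf %s 'BBBB' > dump1
spdk_dd --if=dump0 --of=dump1 --oflag=append
[[ $(<dump1) == 'BBBBAAAA' ]] && echo "append preserved existing contents"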
00:07:50.885 [2024-12-05 06:32:46.159175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69829 ] 00:07:50.885 [2024-12-05 06:32:46.295365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.885 [2024-12-05 06:32:46.324991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.145  [2024-12-05T06:32:46.611Z] Copying: 32/32 [B] (average 31 kBps) 00:07:51.145 00:07:51.145 06:32:46 -- dd/posix.sh@27 -- # [[ k9gm2qsvr0nvc4d5wssuvceyvmhodv2qjrctplnidmygequaiht7pqe2n6hrm1kd == \k\9\g\m\2\q\s\v\r\0\n\v\c\4\d\5\w\s\s\u\v\c\e\y\v\m\h\o\d\v\2\q\j\r\c\t\p\l\n\i\d\m\y\g\e\q\u\a\i\h\t\7\p\q\e\2\n\6\h\r\m\1\k\d ]] 00:07:51.145 00:07:51.145 real 0m0.405s 00:07:51.145 user 0m0.191s 00:07:51.145 sys 0m0.093s 00:07:51.145 06:32:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.145 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:51.145 ************************************ 00:07:51.145 END TEST dd_flag_append 00:07:51.145 ************************************ 00:07:51.145 06:32:46 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:51.145 06:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.145 06:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.145 06:32:46 -- common/autotest_common.sh@10 -- # set +x 00:07:51.145 ************************************ 00:07:51.145 START TEST dd_flag_directory 00:07:51.145 ************************************ 00:07:51.145 06:32:46 -- common/autotest_common.sh@1114 -- # directory 00:07:51.145 06:32:46 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.145 06:32:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:51.145 06:32:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.145 06:32:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.145 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.145 06:32:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.145 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.145 06:32:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.145 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.145 06:32:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.145 06:32:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.145 06:32:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.145 [2024-12-05 06:32:46.606558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
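dd_flag_directory is a negative test: opening a regular file with --iflag=directory (or, further below, --oflag=directory) must fail with "Not a directory", and the harness's NOT wrapper converts that expected failure into a pass. A hypothetical minimal stand-in for NOT:

# NOT in essence: succeed only when the wrapped command fails.
NOT() { ! "$@"; }
NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0 && echo "failed as expected"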
00:07:51.145 [2024-12-05 06:32:46.606836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69850 ] 00:07:51.404 [2024-12-05 06:32:46.742950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.404 [2024-12-05 06:32:46.773045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.404 [2024-12-05 06:32:46.813315] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.404 [2024-12-05 06:32:46.813395] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.404 [2024-12-05 06:32:46.813410] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.663 [2024-12-05 06:32:46.870458] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:51.663 06:32:46 -- common/autotest_common.sh@653 -- # es=236 00:07:51.663 06:32:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.663 06:32:46 -- common/autotest_common.sh@662 -- # es=108 00:07:51.663 06:32:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.663 06:32:46 -- common/autotest_common.sh@670 -- # es=1 00:07:51.663 06:32:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.663 06:32:46 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.663 06:32:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:51.663 06:32:46 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.663 06:32:46 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.663 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.663 06:32:46 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.663 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.663 06:32:46 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.663 06:32:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.663 06:32:46 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.663 06:32:46 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.663 06:32:46 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.663 [2024-12-05 06:32:46.967598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
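The es=236, es=108, es=1 sequence just above is the harness normalizing spdk_dd's exit status before the NOT check: values above 128 have 128 subtracted, and what remains is collapsed to a plain failure code. A condensed sketch of that bookkeeping (the real helper's case statement handles more values):

# Exit-status normalization as traced above: 236 -> 108 -> 1.
es=236
((es > 128)) && es=$((es - 128))  # strip the signal-range offset: 236 -> 108
((es != 0)) && es=1               # collapse any remaining failure to 1
echo "normalized exit status: $es"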
00:07:51.663 [2024-12-05 06:32:46.967683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69859 ] 00:07:51.663 [2024-12-05 06:32:47.098460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.922 [2024-12-05 06:32:47.130498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.922 [2024-12-05 06:32:47.170211] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.922 [2024-12-05 06:32:47.170260] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.922 [2024-12-05 06:32:47.170288] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.922 [2024-12-05 06:32:47.223574] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:51.922 06:32:47 -- common/autotest_common.sh@653 -- # es=236 00:07:51.922 06:32:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.922 06:32:47 -- common/autotest_common.sh@662 -- # es=108 00:07:51.922 06:32:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.922 06:32:47 -- common/autotest_common.sh@670 -- # es=1 00:07:51.922 06:32:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.922 00:07:51.922 real 0m0.725s 00:07:51.922 user 0m0.357s 00:07:51.922 sys 0m0.160s 00:07:51.922 06:32:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.922 ************************************ 00:07:51.922 END TEST dd_flag_directory 00:07:51.922 ************************************ 00:07:51.922 06:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:51.922 06:32:47 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:51.922 06:32:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.922 06:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.922 06:32:47 -- common/autotest_common.sh@10 -- # set +x 00:07:51.922 ************************************ 00:07:51.922 START TEST dd_flag_nofollow 00:07:51.922 ************************************ 00:07:51.922 06:32:47 -- common/autotest_common.sh@1114 -- # nofollow 00:07:51.922 06:32:47 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.922 06:32:47 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.922 06:32:47 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.922 06:32:47 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.922 06:32:47 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.922 06:32:47 -- common/autotest_common.sh@650 -- # local es=0 00:07:51.922 06:32:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.922 06:32:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.922 06:32:47 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 06:32:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.922 06:32:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 06:32:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.922 06:32:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.922 06:32:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.922 06:32:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.922 06:32:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.181 [2024-12-05 06:32:47.386677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:52.181 [2024-12-05 06:32:47.386782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ] 00:07:52.181 [2024-12-05 06:32:47.523231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.181 [2024-12-05 06:32:47.553039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.181 [2024-12-05 06:32:47.592292] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:52.181 [2024-12-05 06:32:47.592358] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:52.181 [2024-12-05 06:32:47.592387] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.181 [2024-12-05 06:32:47.645625] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:52.440 06:32:47 -- common/autotest_common.sh@653 -- # es=216 00:07:52.440 06:32:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.440 06:32:47 -- common/autotest_common.sh@662 -- # es=88 00:07:52.440 06:32:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:52.440 06:32:47 -- common/autotest_common.sh@670 -- # es=1 00:07:52.440 06:32:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.440 06:32:47 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:52.440 06:32:47 -- common/autotest_common.sh@650 -- # local es=0 00:07:52.440 06:32:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:52.440 06:32:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.440 06:32:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.440 06:32:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.440 06:32:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.440 06:32:47 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.440 06:32:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.440 06:32:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.440 06:32:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.440 06:32:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:52.441 [2024-12-05 06:32:47.752532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:52.441 [2024-12-05 06:32:47.752625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69892 ] 00:07:52.441 [2024-12-05 06:32:47.887026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.699 [2024-12-05 06:32:47.916982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.699 [2024-12-05 06:32:47.957665] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.699 [2024-12-05 06:32:47.957739] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.699 [2024-12-05 06:32:47.957768] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.699 [2024-12-05 06:32:48.015987] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:52.699 06:32:48 -- common/autotest_common.sh@653 -- # es=216 00:07:52.699 06:32:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.699 06:32:48 -- common/autotest_common.sh@662 -- # es=88 00:07:52.699 06:32:48 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:52.699 06:32:48 -- common/autotest_common.sh@670 -- # es=1 00:07:52.699 06:32:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.699 06:32:48 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:52.699 06:32:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:52.699 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:52.699 06:32:48 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.699 [2024-12-05 06:32:48.126088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.699 [2024-12-05 06:32:48.126386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69900 ] 00:07:52.958 [2024-12-05 06:32:48.262214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.958 [2024-12-05 06:32:48.291531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.958  [2024-12-05T06:32:48.683Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.217 00:07:53.217 06:32:48 -- dd/posix.sh@49 -- # [[ nuri8khncxc7815fs2bxb42x8b1f32xdjh72ub0ctl6muhld255xdq0ysb4wd60j557191ons3f08bvx48wxqlw5h5fsq2nj5rmpk7o6jbcg45riqwvb8ws6locriwvkma1omu90nyq34wmnlyv1ikrd9ye5sa66h0g6965kia115do6ccwdmeixkh9hg0vyvcfcegpwqnwf37gnj8yq1utbxcsa8v235q3y3l08vpje0m76bqeus6youhsf5dbxb4hi9nulu2msq9srkvg2469i47i9v7739612hyt464nwww8gqm68kyapgakp62botyr6jzw9ze2vby1dmek18ym2fr1d69d7rfnb3t3aknks2pyy2anlu975l8wx92alcm2ae5wtgtdrlfd7fgdux6n3lm0cll0n950snlavaiapek478ylbvkmi3z9xvgj028rv6bmmdfqlqbng50elhi23x8n2v0a4x38aucs1nmuz175v9fii3zsg7ewslhov == \n\u\r\i\8\k\h\n\c\x\c\7\8\1\5\f\s\2\b\x\b\4\2\x\8\b\1\f\3\2\x\d\j\h\7\2\u\b\0\c\t\l\6\m\u\h\l\d\2\5\5\x\d\q\0\y\s\b\4\w\d\6\0\j\5\5\7\1\9\1\o\n\s\3\f\0\8\b\v\x\4\8\w\x\q\l\w\5\h\5\f\s\q\2\n\j\5\r\m\p\k\7\o\6\j\b\c\g\4\5\r\i\q\w\v\b\8\w\s\6\l\o\c\r\i\w\v\k\m\a\1\o\m\u\9\0\n\y\q\3\4\w\m\n\l\y\v\1\i\k\r\d\9\y\e\5\s\a\6\6\h\0\g\6\9\6\5\k\i\a\1\1\5\d\o\6\c\c\w\d\m\e\i\x\k\h\9\h\g\0\v\y\v\c\f\c\e\g\p\w\q\n\w\f\3\7\g\n\j\8\y\q\1\u\t\b\x\c\s\a\8\v\2\3\5\q\3\y\3\l\0\8\v\p\j\e\0\m\7\6\b\q\e\u\s\6\y\o\u\h\s\f\5\d\b\x\b\4\h\i\9\n\u\l\u\2\m\s\q\9\s\r\k\v\g\2\4\6\9\i\4\7\i\9\v\7\7\3\9\6\1\2\h\y\t\4\6\4\n\w\w\w\8\g\q\m\6\8\k\y\a\p\g\a\k\p\6\2\b\o\t\y\r\6\j\z\w\9\z\e\2\v\b\y\1\d\m\e\k\1\8\y\m\2\f\r\1\d\6\9\d\7\r\f\n\b\3\t\3\a\k\n\k\s\2\p\y\y\2\a\n\l\u\9\7\5\l\8\w\x\9\2\a\l\c\m\2\a\e\5\w\t\g\t\d\r\l\f\d\7\f\g\d\u\x\6\n\3\l\m\0\c\l\l\0\n\9\5\0\s\n\l\a\v\a\i\a\p\e\k\4\7\8\y\l\b\v\k\m\i\3\z\9\x\v\g\j\0\2\8\r\v\6\b\m\m\d\f\q\l\q\b\n\g\5\0\e\l\h\i\2\3\x\8\n\2\v\0\a\4\x\3\8\a\u\c\s\1\n\m\u\z\1\7\5\v\9\f\i\i\3\z\s\g\7\e\w\s\l\h\o\v ]] 00:07:53.217 00:07:53.217 real 0m1.141s 00:07:53.217 user 0m0.550s 00:07:53.217 sys 0m0.264s 00:07:53.217 06:32:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.217 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:53.217 ************************************ 00:07:53.217 END TEST dd_flag_nofollow 00:07:53.217 ************************************ 00:07:53.217 06:32:48 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:53.217 06:32:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.217 06:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.217 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:53.217 ************************************ 00:07:53.217 START TEST dd_flag_noatime 00:07:53.217 ************************************ 00:07:53.217 06:32:48 -- common/autotest_common.sh@1114 -- # noatime 00:07:53.217 06:32:48 -- dd/posix.sh@53 -- # local atime_if 00:07:53.217 06:32:48 -- dd/posix.sh@54 -- # local atime_of 00:07:53.217 06:32:48 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:53.217 06:32:48 -- dd/common.sh@98 -- # xtrace_disable 00:07:53.217 06:32:48 -- common/autotest_common.sh@10 -- # set +x 00:07:53.217 06:32:48 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.217 06:32:48 -- dd/posix.sh@60 -- # atime_if=1733380368 
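The dd_flag_noatime trace around this point follows a simple pattern: record each dump file's atime with stat --printf=%X, sleep one second, copy with --iflag=noatime, and assert that the source atime did not advance (the (( atime_if == ... )) checks below). A minimal sketch of the same check using GNU dd and stat as stand-ins for spdk_dd; the temp paths here are hypothetical, and spdk_dd-specific startup (the DPDK EAL banner above) has no coreutils equivalent:

  dump0=$(mktemp) && dump1=$(mktemp)
  echo data > "$dump0"
  atime_if=$(stat --printf=%X "$dump0")      # %X = access time, seconds since epoch
  sleep 1                                    # let the wall clock move past atime_if
  dd if="$dump0" iflag=noatime of="$dump1"   # O_NOATIME read: source atime must not change
  (( $(stat --printf=%X "$dump0") == atime_if )) && echo "atime preserved"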
00:07:53.217 06:32:48 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.217 06:32:48 -- dd/posix.sh@61 -- # atime_of=1733380368 00:07:53.217 06:32:48 -- dd/posix.sh@66 -- # sleep 1 00:07:54.153 06:32:49 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.153 [2024-12-05 06:32:49.596200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.153 [2024-12-05 06:32:49.596528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:07:54.411 [2024-12-05 06:32:49.735026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.411 [2024-12-05 06:32:49.773814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.411  [2024-12-05T06:32:50.136Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.670 00:07:54.670 06:32:49 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.670 06:32:49 -- dd/posix.sh@69 -- # (( atime_if == 1733380368 )) 00:07:54.670 06:32:49 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.670 06:32:49 -- dd/posix.sh@70 -- # (( atime_of == 1733380368 )) 00:07:54.670 06:32:49 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.670 [2024-12-05 06:32:50.032925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.670 [2024-12-05 06:32:50.033247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69947 ] 00:07:54.928 [2024-12-05 06:32:50.170452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.928 [2024-12-05 06:32:50.199851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.928  [2024-12-05T06:32:50.394Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.928 00:07:54.928 06:32:50 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.187 ************************************ 00:07:55.187 END TEST dd_flag_noatime 00:07:55.187 ************************************ 00:07:55.187 06:32:50 -- dd/posix.sh@73 -- # (( atime_if < 1733380370 )) 00:07:55.187 00:07:55.187 real 0m1.868s 00:07:55.187 user 0m0.419s 00:07:55.187 sys 0m0.206s 00:07:55.187 06:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.187 06:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.187 06:32:50 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:55.187 06:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.187 06:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.187 06:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.187 ************************************ 00:07:55.187 START TEST dd_flags_misc 00:07:55.187 ************************************ 00:07:55.187 06:32:50 -- common/autotest_common.sh@1114 -- # io 00:07:55.187 06:32:50 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:55.187 06:32:50 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:55.187 06:32:50 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:55.187 06:32:50 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:55.187 06:32:50 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:55.187 06:32:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:55.187 06:32:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.187 06:32:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.187 06:32:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:55.187 [2024-12-05 06:32:50.500994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.187 [2024-12-05 06:32:50.501090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:07:55.187 [2024-12-05 06:32:50.638224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.445 [2024-12-05 06:32:50.668467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.446  [2024-12-05T06:32:50.912Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.446 00:07:55.446 06:32:50 -- dd/posix.sh@93 -- # [[ ar3n9zsy1tfy0196u88egu2tjr7qkcfm0d4yij9rhbx17o0z3wpdjr5739chkszxjbpo259jzgybc5k7kqjlhqf3k0pkaq2ssi0rqr47trgs5t0gvqi6udbyz45sfqdih2owhmnui0woqgc8jzae6kluv08928676dsxtek1wjpo2hd1hl4ar4tzdescla1jsiiaqz2hr9veue74er467v20hdilvndnzu0eh3kzpv6elvvikjj3g6vqphqsf60g77biqpl89txpdf8mkb1u6jfte4p4wsdez3gabtq4si3hnbe8i190sv0vnhry7w2s52r2v21cnkuoib4pjpfthqb9b5ypwpyswh4xcofu6dcshaclbququ2d5rtqj2ayzfhaa9flg1wb1gkkqzzsykuwsr9eaqj38wvbjiuhc78knserz3orlipq2y19aqb2ikvhkzzze3wczsp2t8zj7gbtpgcae3eif30og5xz68ql3vxmj0nl86g3zyrx0a5pe == \a\r\3\n\9\z\s\y\1\t\f\y\0\1\9\6\u\8\8\e\g\u\2\t\j\r\7\q\k\c\f\m\0\d\4\y\i\j\9\r\h\b\x\1\7\o\0\z\3\w\p\d\j\r\5\7\3\9\c\h\k\s\z\x\j\b\p\o\2\5\9\j\z\g\y\b\c\5\k\7\k\q\j\l\h\q\f\3\k\0\p\k\a\q\2\s\s\i\0\r\q\r\4\7\t\r\g\s\5\t\0\g\v\q\i\6\u\d\b\y\z\4\5\s\f\q\d\i\h\2\o\w\h\m\n\u\i\0\w\o\q\g\c\8\j\z\a\e\6\k\l\u\v\0\8\9\2\8\6\7\6\d\s\x\t\e\k\1\w\j\p\o\2\h\d\1\h\l\4\a\r\4\t\z\d\e\s\c\l\a\1\j\s\i\i\a\q\z\2\h\r\9\v\e\u\e\7\4\e\r\4\6\7\v\2\0\h\d\i\l\v\n\d\n\z\u\0\e\h\3\k\z\p\v\6\e\l\v\v\i\k\j\j\3\g\6\v\q\p\h\q\s\f\6\0\g\7\7\b\i\q\p\l\8\9\t\x\p\d\f\8\m\k\b\1\u\6\j\f\t\e\4\p\4\w\s\d\e\z\3\g\a\b\t\q\4\s\i\3\h\n\b\e\8\i\1\9\0\s\v\0\v\n\h\r\y\7\w\2\s\5\2\r\2\v\2\1\c\n\k\u\o\i\b\4\p\j\p\f\t\h\q\b\9\b\5\y\p\w\p\y\s\w\h\4\x\c\o\f\u\6\d\c\s\h\a\c\l\b\q\u\q\u\2\d\5\r\t\q\j\2\a\y\z\f\h\a\a\9\f\l\g\1\w\b\1\g\k\k\q\z\z\s\y\k\u\w\s\r\9\e\a\q\j\3\8\w\v\b\j\i\u\h\c\7\8\k\n\s\e\r\z\3\o\r\l\i\p\q\2\y\1\9\a\q\b\2\i\k\v\h\k\z\z\z\e\3\w\c\z\s\p\2\t\8\z\j\7\g\b\t\p\g\c\a\e\3\e\i\f\3\0\o\g\5\x\z\6\8\q\l\3\v\x\m\j\0\n\l\8\6\g\3\z\y\r\x\0\a\5\p\e ]] 00:07:55.446 06:32:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.446 06:32:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:55.446 [2024-12-05 06:32:50.895595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.446 [2024-12-05 06:32:50.895696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69980 ] 00:07:55.705 [2024-12-05 06:32:51.028130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.705 [2024-12-05 06:32:51.057533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.705  [2024-12-05T06:32:51.432Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.966 00:07:55.966 06:32:51 -- dd/posix.sh@93 -- # [[ ar3n9zsy1tfy0196u88egu2tjr7qkcfm0d4yij9rhbx17o0z3wpdjr5739chkszxjbpo259jzgybc5k7kqjlhqf3k0pkaq2ssi0rqr47trgs5t0gvqi6udbyz45sfqdih2owhmnui0woqgc8jzae6kluv08928676dsxtek1wjpo2hd1hl4ar4tzdescla1jsiiaqz2hr9veue74er467v20hdilvndnzu0eh3kzpv6elvvikjj3g6vqphqsf60g77biqpl89txpdf8mkb1u6jfte4p4wsdez3gabtq4si3hnbe8i190sv0vnhry7w2s52r2v21cnkuoib4pjpfthqb9b5ypwpyswh4xcofu6dcshaclbququ2d5rtqj2ayzfhaa9flg1wb1gkkqzzsykuwsr9eaqj38wvbjiuhc78knserz3orlipq2y19aqb2ikvhkzzze3wczsp2t8zj7gbtpgcae3eif30og5xz68ql3vxmj0nl86g3zyrx0a5pe == \a\r\3\n\9\z\s\y\1\t\f\y\0\1\9\6\u\8\8\e\g\u\2\t\j\r\7\q\k\c\f\m\0\d\4\y\i\j\9\r\h\b\x\1\7\o\0\z\3\w\p\d\j\r\5\7\3\9\c\h\k\s\z\x\j\b\p\o\2\5\9\j\z\g\y\b\c\5\k\7\k\q\j\l\h\q\f\3\k\0\p\k\a\q\2\s\s\i\0\r\q\r\4\7\t\r\g\s\5\t\0\g\v\q\i\6\u\d\b\y\z\4\5\s\f\q\d\i\h\2\o\w\h\m\n\u\i\0\w\o\q\g\c\8\j\z\a\e\6\k\l\u\v\0\8\9\2\8\6\7\6\d\s\x\t\e\k\1\w\j\p\o\2\h\d\1\h\l\4\a\r\4\t\z\d\e\s\c\l\a\1\j\s\i\i\a\q\z\2\h\r\9\v\e\u\e\7\4\e\r\4\6\7\v\2\0\h\d\i\l\v\n\d\n\z\u\0\e\h\3\k\z\p\v\6\e\l\v\v\i\k\j\j\3\g\6\v\q\p\h\q\s\f\6\0\g\7\7\b\i\q\p\l\8\9\t\x\p\d\f\8\m\k\b\1\u\6\j\f\t\e\4\p\4\w\s\d\e\z\3\g\a\b\t\q\4\s\i\3\h\n\b\e\8\i\1\9\0\s\v\0\v\n\h\r\y\7\w\2\s\5\2\r\2\v\2\1\c\n\k\u\o\i\b\4\p\j\p\f\t\h\q\b\9\b\5\y\p\w\p\y\s\w\h\4\x\c\o\f\u\6\d\c\s\h\a\c\l\b\q\u\q\u\2\d\5\r\t\q\j\2\a\y\z\f\h\a\a\9\f\l\g\1\w\b\1\g\k\k\q\z\z\s\y\k\u\w\s\r\9\e\a\q\j\3\8\w\v\b\j\i\u\h\c\7\8\k\n\s\e\r\z\3\o\r\l\i\p\q\2\y\1\9\a\q\b\2\i\k\v\h\k\z\z\z\e\3\w\c\z\s\p\2\t\8\z\j\7\g\b\t\p\g\c\a\e\3\e\i\f\3\0\o\g\5\x\z\6\8\q\l\3\v\x\m\j\0\n\l\8\6\g\3\z\y\r\x\0\a\5\p\e ]] 00:07:55.966 06:32:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.966 06:32:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.966 [2024-12-05 06:32:51.271376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.966 [2024-12-05 06:32:51.271472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69988 ] 00:07:55.966 [2024-12-05 06:32:51.407544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.251 [2024-12-05 06:32:51.437766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.251  [2024-12-05T06:32:51.717Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.251 00:07:56.251 06:32:51 -- dd/posix.sh@93 -- # [[ ar3n9zsy1tfy0196u88egu2tjr7qkcfm0d4yij9rhbx17o0z3wpdjr5739chkszxjbpo259jzgybc5k7kqjlhqf3k0pkaq2ssi0rqr47trgs5t0gvqi6udbyz45sfqdih2owhmnui0woqgc8jzae6kluv08928676dsxtek1wjpo2hd1hl4ar4tzdescla1jsiiaqz2hr9veue74er467v20hdilvndnzu0eh3kzpv6elvvikjj3g6vqphqsf60g77biqpl89txpdf8mkb1u6jfte4p4wsdez3gabtq4si3hnbe8i190sv0vnhry7w2s52r2v21cnkuoib4pjpfthqb9b5ypwpyswh4xcofu6dcshaclbququ2d5rtqj2ayzfhaa9flg1wb1gkkqzzsykuwsr9eaqj38wvbjiuhc78knserz3orlipq2y19aqb2ikvhkzzze3wczsp2t8zj7gbtpgcae3eif30og5xz68ql3vxmj0nl86g3zyrx0a5pe == \a\r\3\n\9\z\s\y\1\t\f\y\0\1\9\6\u\8\8\e\g\u\2\t\j\r\7\q\k\c\f\m\0\d\4\y\i\j\9\r\h\b\x\1\7\o\0\z\3\w\p\d\j\r\5\7\3\9\c\h\k\s\z\x\j\b\p\o\2\5\9\j\z\g\y\b\c\5\k\7\k\q\j\l\h\q\f\3\k\0\p\k\a\q\2\s\s\i\0\r\q\r\4\7\t\r\g\s\5\t\0\g\v\q\i\6\u\d\b\y\z\4\5\s\f\q\d\i\h\2\o\w\h\m\n\u\i\0\w\o\q\g\c\8\j\z\a\e\6\k\l\u\v\0\8\9\2\8\6\7\6\d\s\x\t\e\k\1\w\j\p\o\2\h\d\1\h\l\4\a\r\4\t\z\d\e\s\c\l\a\1\j\s\i\i\a\q\z\2\h\r\9\v\e\u\e\7\4\e\r\4\6\7\v\2\0\h\d\i\l\v\n\d\n\z\u\0\e\h\3\k\z\p\v\6\e\l\v\v\i\k\j\j\3\g\6\v\q\p\h\q\s\f\6\0\g\7\7\b\i\q\p\l\8\9\t\x\p\d\f\8\m\k\b\1\u\6\j\f\t\e\4\p\4\w\s\d\e\z\3\g\a\b\t\q\4\s\i\3\h\n\b\e\8\i\1\9\0\s\v\0\v\n\h\r\y\7\w\2\s\5\2\r\2\v\2\1\c\n\k\u\o\i\b\4\p\j\p\f\t\h\q\b\9\b\5\y\p\w\p\y\s\w\h\4\x\c\o\f\u\6\d\c\s\h\a\c\l\b\q\u\q\u\2\d\5\r\t\q\j\2\a\y\z\f\h\a\a\9\f\l\g\1\w\b\1\g\k\k\q\z\z\s\y\k\u\w\s\r\9\e\a\q\j\3\8\w\v\b\j\i\u\h\c\7\8\k\n\s\e\r\z\3\o\r\l\i\p\q\2\y\1\9\a\q\b\2\i\k\v\h\k\z\z\z\e\3\w\c\z\s\p\2\t\8\z\j\7\g\b\t\p\g\c\a\e\3\e\i\f\3\0\o\g\5\x\z\6\8\q\l\3\v\x\m\j\0\n\l\8\6\g\3\z\y\r\x\0\a\5\p\e ]] 00:07:56.251 06:32:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.251 06:32:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.251 [2024-12-05 06:32:51.657960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:56.251 [2024-12-05 06:32:51.658218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69995 ] 00:07:56.515 [2024-12-05 06:32:51.792192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.515 [2024-12-05 06:32:51.821284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.515  [2024-12-05T06:32:52.240Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.774 00:07:56.774 06:32:51 -- dd/posix.sh@93 -- # [[ ar3n9zsy1tfy0196u88egu2tjr7qkcfm0d4yij9rhbx17o0z3wpdjr5739chkszxjbpo259jzgybc5k7kqjlhqf3k0pkaq2ssi0rqr47trgs5t0gvqi6udbyz45sfqdih2owhmnui0woqgc8jzae6kluv08928676dsxtek1wjpo2hd1hl4ar4tzdescla1jsiiaqz2hr9veue74er467v20hdilvndnzu0eh3kzpv6elvvikjj3g6vqphqsf60g77biqpl89txpdf8mkb1u6jfte4p4wsdez3gabtq4si3hnbe8i190sv0vnhry7w2s52r2v21cnkuoib4pjpfthqb9b5ypwpyswh4xcofu6dcshaclbququ2d5rtqj2ayzfhaa9flg1wb1gkkqzzsykuwsr9eaqj38wvbjiuhc78knserz3orlipq2y19aqb2ikvhkzzze3wczsp2t8zj7gbtpgcae3eif30og5xz68ql3vxmj0nl86g3zyrx0a5pe == \a\r\3\n\9\z\s\y\1\t\f\y\0\1\9\6\u\8\8\e\g\u\2\t\j\r\7\q\k\c\f\m\0\d\4\y\i\j\9\r\h\b\x\1\7\o\0\z\3\w\p\d\j\r\5\7\3\9\c\h\k\s\z\x\j\b\p\o\2\5\9\j\z\g\y\b\c\5\k\7\k\q\j\l\h\q\f\3\k\0\p\k\a\q\2\s\s\i\0\r\q\r\4\7\t\r\g\s\5\t\0\g\v\q\i\6\u\d\b\y\z\4\5\s\f\q\d\i\h\2\o\w\h\m\n\u\i\0\w\o\q\g\c\8\j\z\a\e\6\k\l\u\v\0\8\9\2\8\6\7\6\d\s\x\t\e\k\1\w\j\p\o\2\h\d\1\h\l\4\a\r\4\t\z\d\e\s\c\l\a\1\j\s\i\i\a\q\z\2\h\r\9\v\e\u\e\7\4\e\r\4\6\7\v\2\0\h\d\i\l\v\n\d\n\z\u\0\e\h\3\k\z\p\v\6\e\l\v\v\i\k\j\j\3\g\6\v\q\p\h\q\s\f\6\0\g\7\7\b\i\q\p\l\8\9\t\x\p\d\f\8\m\k\b\1\u\6\j\f\t\e\4\p\4\w\s\d\e\z\3\g\a\b\t\q\4\s\i\3\h\n\b\e\8\i\1\9\0\s\v\0\v\n\h\r\y\7\w\2\s\5\2\r\2\v\2\1\c\n\k\u\o\i\b\4\p\j\p\f\t\h\q\b\9\b\5\y\p\w\p\y\s\w\h\4\x\c\o\f\u\6\d\c\s\h\a\c\l\b\q\u\q\u\2\d\5\r\t\q\j\2\a\y\z\f\h\a\a\9\f\l\g\1\w\b\1\g\k\k\q\z\z\s\y\k\u\w\s\r\9\e\a\q\j\3\8\w\v\b\j\i\u\h\c\7\8\k\n\s\e\r\z\3\o\r\l\i\p\q\2\y\1\9\a\q\b\2\i\k\v\h\k\z\z\z\e\3\w\c\z\s\p\2\t\8\z\j\7\g\b\t\p\g\c\a\e\3\e\i\f\3\0\o\g\5\x\z\6\8\q\l\3\v\x\m\j\0\n\l\8\6\g\3\z\y\r\x\0\a\5\p\e ]] 00:07:56.774 06:32:51 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:56.774 06:32:51 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:56.774 06:32:51 -- dd/common.sh@98 -- # xtrace_disable 00:07:56.774 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 06:32:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.774 06:32:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:56.774 [2024-12-05 06:32:52.052151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:56.774 [2024-12-05 06:32:52.052438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69997 ] 00:07:56.774 [2024-12-05 06:32:52.187566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.774 [2024-12-05 06:32:52.216326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.034  [2024-12-05T06:32:52.500Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.034 00:07:57.035 06:32:52 -- dd/posix.sh@93 -- # [[ c82sn9w74v25w12rulg7b3x42k09oanjkoui6ip0lva09vxzrih2dgfi52lqu930s812vhxgx8pfzu9poxq6of13fv43dhn1yqivwg4l6rdttka1y161bf8gepw889a394ajp1lz84p6luku08mqjp48utgjrqxksw422q492upq3hnafgiiyryywpybf6rpl9morfr8xol4fp8g4tkqzphu4dnit3d4xtksgfapqhhilca82s8nfk4klx6bvvxsn3vtj9t7tykxw99yd10msajju1y1d0yg548nvj41w847iox8ullxxbpve9be1fn7xao0v0vwpw5hab95sxsvyyac5m7ena0luhzyqw7lj1gc9ld5ysrbsgsp2xbe2obzgth1ifuh3il1gxowfh6441xkbfcr9qrtcgg6n8dupp8h518c8egg95oo79d55luqu1jwhj8pfbv8lm0qb58p1drw4uixj864syhs07fqb906bdsnn52739ralxgybcv8 == \c\8\2\s\n\9\w\7\4\v\2\5\w\1\2\r\u\l\g\7\b\3\x\4\2\k\0\9\o\a\n\j\k\o\u\i\6\i\p\0\l\v\a\0\9\v\x\z\r\i\h\2\d\g\f\i\5\2\l\q\u\9\3\0\s\8\1\2\v\h\x\g\x\8\p\f\z\u\9\p\o\x\q\6\o\f\1\3\f\v\4\3\d\h\n\1\y\q\i\v\w\g\4\l\6\r\d\t\t\k\a\1\y\1\6\1\b\f\8\g\e\p\w\8\8\9\a\3\9\4\a\j\p\1\l\z\8\4\p\6\l\u\k\u\0\8\m\q\j\p\4\8\u\t\g\j\r\q\x\k\s\w\4\2\2\q\4\9\2\u\p\q\3\h\n\a\f\g\i\i\y\r\y\y\w\p\y\b\f\6\r\p\l\9\m\o\r\f\r\8\x\o\l\4\f\p\8\g\4\t\k\q\z\p\h\u\4\d\n\i\t\3\d\4\x\t\k\s\g\f\a\p\q\h\h\i\l\c\a\8\2\s\8\n\f\k\4\k\l\x\6\b\v\v\x\s\n\3\v\t\j\9\t\7\t\y\k\x\w\9\9\y\d\1\0\m\s\a\j\j\u\1\y\1\d\0\y\g\5\4\8\n\v\j\4\1\w\8\4\7\i\o\x\8\u\l\l\x\x\b\p\v\e\9\b\e\1\f\n\7\x\a\o\0\v\0\v\w\p\w\5\h\a\b\9\5\s\x\s\v\y\y\a\c\5\m\7\e\n\a\0\l\u\h\z\y\q\w\7\l\j\1\g\c\9\l\d\5\y\s\r\b\s\g\s\p\2\x\b\e\2\o\b\z\g\t\h\1\i\f\u\h\3\i\l\1\g\x\o\w\f\h\6\4\4\1\x\k\b\f\c\r\9\q\r\t\c\g\g\6\n\8\d\u\p\p\8\h\5\1\8\c\8\e\g\g\9\5\o\o\7\9\d\5\5\l\u\q\u\1\j\w\h\j\8\p\f\b\v\8\l\m\0\q\b\5\8\p\1\d\r\w\4\u\i\x\j\8\6\4\s\y\h\s\0\7\f\q\b\9\0\6\b\d\s\n\n\5\2\7\3\9\r\a\l\x\g\y\b\c\v\8 ]] 00:07:57.035 06:32:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.035 06:32:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:57.035 [2024-12-05 06:32:52.413907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:57.035 [2024-12-05 06:32:52.413992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70010 ] 00:07:57.294 [2024-12-05 06:32:52.542431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.294 [2024-12-05 06:32:52.576396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.294  [2024-12-05T06:32:52.760Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.294 00:07:57.294 06:32:52 -- dd/posix.sh@93 -- # [[ c82sn9w74v25w12rulg7b3x42k09oanjkoui6ip0lva09vxzrih2dgfi52lqu930s812vhxgx8pfzu9poxq6of13fv43dhn1yqivwg4l6rdttka1y161bf8gepw889a394ajp1lz84p6luku08mqjp48utgjrqxksw422q492upq3hnafgiiyryywpybf6rpl9morfr8xol4fp8g4tkqzphu4dnit3d4xtksgfapqhhilca82s8nfk4klx6bvvxsn3vtj9t7tykxw99yd10msajju1y1d0yg548nvj41w847iox8ullxxbpve9be1fn7xao0v0vwpw5hab95sxsvyyac5m7ena0luhzyqw7lj1gc9ld5ysrbsgsp2xbe2obzgth1ifuh3il1gxowfh6441xkbfcr9qrtcgg6n8dupp8h518c8egg95oo79d55luqu1jwhj8pfbv8lm0qb58p1drw4uixj864syhs07fqb906bdsnn52739ralxgybcv8 == \c\8\2\s\n\9\w\7\4\v\2\5\w\1\2\r\u\l\g\7\b\3\x\4\2\k\0\9\o\a\n\j\k\o\u\i\6\i\p\0\l\v\a\0\9\v\x\z\r\i\h\2\d\g\f\i\5\2\l\q\u\9\3\0\s\8\1\2\v\h\x\g\x\8\p\f\z\u\9\p\o\x\q\6\o\f\1\3\f\v\4\3\d\h\n\1\y\q\i\v\w\g\4\l\6\r\d\t\t\k\a\1\y\1\6\1\b\f\8\g\e\p\w\8\8\9\a\3\9\4\a\j\p\1\l\z\8\4\p\6\l\u\k\u\0\8\m\q\j\p\4\8\u\t\g\j\r\q\x\k\s\w\4\2\2\q\4\9\2\u\p\q\3\h\n\a\f\g\i\i\y\r\y\y\w\p\y\b\f\6\r\p\l\9\m\o\r\f\r\8\x\o\l\4\f\p\8\g\4\t\k\q\z\p\h\u\4\d\n\i\t\3\d\4\x\t\k\s\g\f\a\p\q\h\h\i\l\c\a\8\2\s\8\n\f\k\4\k\l\x\6\b\v\v\x\s\n\3\v\t\j\9\t\7\t\y\k\x\w\9\9\y\d\1\0\m\s\a\j\j\u\1\y\1\d\0\y\g\5\4\8\n\v\j\4\1\w\8\4\7\i\o\x\8\u\l\l\x\x\b\p\v\e\9\b\e\1\f\n\7\x\a\o\0\v\0\v\w\p\w\5\h\a\b\9\5\s\x\s\v\y\y\a\c\5\m\7\e\n\a\0\l\u\h\z\y\q\w\7\l\j\1\g\c\9\l\d\5\y\s\r\b\s\g\s\p\2\x\b\e\2\o\b\z\g\t\h\1\i\f\u\h\3\i\l\1\g\x\o\w\f\h\6\4\4\1\x\k\b\f\c\r\9\q\r\t\c\g\g\6\n\8\d\u\p\p\8\h\5\1\8\c\8\e\g\g\9\5\o\o\7\9\d\5\5\l\u\q\u\1\j\w\h\j\8\p\f\b\v\8\l\m\0\q\b\5\8\p\1\d\r\w\4\u\i\x\j\8\6\4\s\y\h\s\0\7\f\q\b\9\0\6\b\d\s\n\n\5\2\7\3\9\r\a\l\x\g\y\b\c\v\8 ]] 00:07:57.294 06:32:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.294 06:32:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:57.554 [2024-12-05 06:32:52.778515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:57.554 [2024-12-05 06:32:52.778781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70012 ] 00:07:57.554 [2024-12-05 06:32:52.904282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.554 [2024-12-05 06:32:52.934340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.554  [2024-12-05T06:32:53.279Z] Copying: 512/512 [B] (average 166 kBps) 00:07:57.813 00:07:57.813 06:32:53 -- dd/posix.sh@93 -- # [[ c82sn9w74v25w12rulg7b3x42k09oanjkoui6ip0lva09vxzrih2dgfi52lqu930s812vhxgx8pfzu9poxq6of13fv43dhn1yqivwg4l6rdttka1y161bf8gepw889a394ajp1lz84p6luku08mqjp48utgjrqxksw422q492upq3hnafgiiyryywpybf6rpl9morfr8xol4fp8g4tkqzphu4dnit3d4xtksgfapqhhilca82s8nfk4klx6bvvxsn3vtj9t7tykxw99yd10msajju1y1d0yg548nvj41w847iox8ullxxbpve9be1fn7xao0v0vwpw5hab95sxsvyyac5m7ena0luhzyqw7lj1gc9ld5ysrbsgsp2xbe2obzgth1ifuh3il1gxowfh6441xkbfcr9qrtcgg6n8dupp8h518c8egg95oo79d55luqu1jwhj8pfbv8lm0qb58p1drw4uixj864syhs07fqb906bdsnn52739ralxgybcv8 == \c\8\2\s\n\9\w\7\4\v\2\5\w\1\2\r\u\l\g\7\b\3\x\4\2\k\0\9\o\a\n\j\k\o\u\i\6\i\p\0\l\v\a\0\9\v\x\z\r\i\h\2\d\g\f\i\5\2\l\q\u\9\3\0\s\8\1\2\v\h\x\g\x\8\p\f\z\u\9\p\o\x\q\6\o\f\1\3\f\v\4\3\d\h\n\1\y\q\i\v\w\g\4\l\6\r\d\t\t\k\a\1\y\1\6\1\b\f\8\g\e\p\w\8\8\9\a\3\9\4\a\j\p\1\l\z\8\4\p\6\l\u\k\u\0\8\m\q\j\p\4\8\u\t\g\j\r\q\x\k\s\w\4\2\2\q\4\9\2\u\p\q\3\h\n\a\f\g\i\i\y\r\y\y\w\p\y\b\f\6\r\p\l\9\m\o\r\f\r\8\x\o\l\4\f\p\8\g\4\t\k\q\z\p\h\u\4\d\n\i\t\3\d\4\x\t\k\s\g\f\a\p\q\h\h\i\l\c\a\8\2\s\8\n\f\k\4\k\l\x\6\b\v\v\x\s\n\3\v\t\j\9\t\7\t\y\k\x\w\9\9\y\d\1\0\m\s\a\j\j\u\1\y\1\d\0\y\g\5\4\8\n\v\j\4\1\w\8\4\7\i\o\x\8\u\l\l\x\x\b\p\v\e\9\b\e\1\f\n\7\x\a\o\0\v\0\v\w\p\w\5\h\a\b\9\5\s\x\s\v\y\y\a\c\5\m\7\e\n\a\0\l\u\h\z\y\q\w\7\l\j\1\g\c\9\l\d\5\y\s\r\b\s\g\s\p\2\x\b\e\2\o\b\z\g\t\h\1\i\f\u\h\3\i\l\1\g\x\o\w\f\h\6\4\4\1\x\k\b\f\c\r\9\q\r\t\c\g\g\6\n\8\d\u\p\p\8\h\5\1\8\c\8\e\g\g\9\5\o\o\7\9\d\5\5\l\u\q\u\1\j\w\h\j\8\p\f\b\v\8\l\m\0\q\b\5\8\p\1\d\r\w\4\u\i\x\j\8\6\4\s\y\h\s\0\7\f\q\b\9\0\6\b\d\s\n\n\5\2\7\3\9\r\a\l\x\g\y\b\c\v\8 ]] 00:07:57.813 06:32:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.813 06:32:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:57.813 [2024-12-05 06:32:53.151322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:57.813 [2024-12-05 06:32:53.151582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70020 ] 00:07:57.813 [2024-12-05 06:32:53.273294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.073 [2024-12-05 06:32:53.303551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.073  [2024-12-05T06:32:53.539Z] Copying: 512/512 [B] (average 166 kBps) 00:07:58.073 00:07:58.073 06:32:53 -- dd/posix.sh@93 -- # [[ c82sn9w74v25w12rulg7b3x42k09oanjkoui6ip0lva09vxzrih2dgfi52lqu930s812vhxgx8pfzu9poxq6of13fv43dhn1yqivwg4l6rdttka1y161bf8gepw889a394ajp1lz84p6luku08mqjp48utgjrqxksw422q492upq3hnafgiiyryywpybf6rpl9morfr8xol4fp8g4tkqzphu4dnit3d4xtksgfapqhhilca82s8nfk4klx6bvvxsn3vtj9t7tykxw99yd10msajju1y1d0yg548nvj41w847iox8ullxxbpve9be1fn7xao0v0vwpw5hab95sxsvyyac5m7ena0luhzyqw7lj1gc9ld5ysrbsgsp2xbe2obzgth1ifuh3il1gxowfh6441xkbfcr9qrtcgg6n8dupp8h518c8egg95oo79d55luqu1jwhj8pfbv8lm0qb58p1drw4uixj864syhs07fqb906bdsnn52739ralxgybcv8 == \c\8\2\s\n\9\w\7\4\v\2\5\w\1\2\r\u\l\g\7\b\3\x\4\2\k\0\9\o\a\n\j\k\o\u\i\6\i\p\0\l\v\a\0\9\v\x\z\r\i\h\2\d\g\f\i\5\2\l\q\u\9\3\0\s\8\1\2\v\h\x\g\x\8\p\f\z\u\9\p\o\x\q\6\o\f\1\3\f\v\4\3\d\h\n\1\y\q\i\v\w\g\4\l\6\r\d\t\t\k\a\1\y\1\6\1\b\f\8\g\e\p\w\8\8\9\a\3\9\4\a\j\p\1\l\z\8\4\p\6\l\u\k\u\0\8\m\q\j\p\4\8\u\t\g\j\r\q\x\k\s\w\4\2\2\q\4\9\2\u\p\q\3\h\n\a\f\g\i\i\y\r\y\y\w\p\y\b\f\6\r\p\l\9\m\o\r\f\r\8\x\o\l\4\f\p\8\g\4\t\k\q\z\p\h\u\4\d\n\i\t\3\d\4\x\t\k\s\g\f\a\p\q\h\h\i\l\c\a\8\2\s\8\n\f\k\4\k\l\x\6\b\v\v\x\s\n\3\v\t\j\9\t\7\t\y\k\x\w\9\9\y\d\1\0\m\s\a\j\j\u\1\y\1\d\0\y\g\5\4\8\n\v\j\4\1\w\8\4\7\i\o\x\8\u\l\l\x\x\b\p\v\e\9\b\e\1\f\n\7\x\a\o\0\v\0\v\w\p\w\5\h\a\b\9\5\s\x\s\v\y\y\a\c\5\m\7\e\n\a\0\l\u\h\z\y\q\w\7\l\j\1\g\c\9\l\d\5\y\s\r\b\s\g\s\p\2\x\b\e\2\o\b\z\g\t\h\1\i\f\u\h\3\i\l\1\g\x\o\w\f\h\6\4\4\1\x\k\b\f\c\r\9\q\r\t\c\g\g\6\n\8\d\u\p\p\8\h\5\1\8\c\8\e\g\g\9\5\o\o\7\9\d\5\5\l\u\q\u\1\j\w\h\j\8\p\f\b\v\8\l\m\0\q\b\5\8\p\1\d\r\w\4\u\i\x\j\8\6\4\s\y\h\s\0\7\f\q\b\9\0\6\b\d\s\n\n\5\2\7\3\9\r\a\l\x\g\y\b\c\v\8 ]] 00:07:58.073 00:07:58.073 real 0m3.035s 00:07:58.073 user 0m1.386s 00:07:58.073 sys 0m0.665s 00:07:58.073 06:32:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.073 ************************************ 00:07:58.073 END TEST dd_flags_misc 00:07:58.073 ************************************ 00:07:58.073 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.073 06:32:53 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:58.073 06:32:53 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:58.073 * Second test run, disabling liburing, forcing AIO 00:07:58.073 06:32:53 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:58.073 06:32:53 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:58.073 06:32:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.073 06:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.073 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.073 ************************************ 00:07:58.073 START TEST dd_flag_append_forced_aio 00:07:58.073 ************************************ 00:07:58.073 06:32:53 -- common/autotest_common.sh@1114 -- # append 00:07:58.073 06:32:53 -- dd/posix.sh@16 -- # local dump0 00:07:58.073 06:32:53 -- dd/posix.sh@17 -- # local dump1 00:07:58.073 06:32:53 -- dd/posix.sh@19 -- # gen_bytes 32 
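The dd_flag_append_forced_aio test starting here repeats the earlier append check under --aio: write one 32-byte string into each dump file, copy dump0 onto dump1 with --oflag=append, and assert the result is dump1 followed by dump0 (the [[ ... ]] comparison below). A rough GNU dd equivalent, with short made-up strings standing in for the gen_bytes 32 output and temp files for the dd.dump paths; the --aio switch itself is specific to spdk_dd:

  f0=$(mktemp) && f1=$(mktemp)
  printf '%s' "aaaa" > "$f0"                        # stand-in for dump0
  printf '%s' "bbbb" > "$f1"                        # stand-in for dump1
  dd if="$f0" of="$f1" oflag=append conv=notrunc    # append to f1 without truncating it
  [[ $(<"$f1") == "bbbbaaaa" ]] && echo "append flag honored"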
00:07:58.073 06:32:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:58.073 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.073 06:32:53 -- dd/posix.sh@19 -- # dump0=lshp0wh3so7u4sqcqvtf9enlopluj8kt 00:07:58.333 06:32:53 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:58.333 06:32:53 -- dd/common.sh@98 -- # xtrace_disable 00:07:58.333 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.333 06:32:53 -- dd/posix.sh@20 -- # dump1=p0brom93ioicyrkjh57cp2ed4jqt30qx 00:07:58.333 06:32:53 -- dd/posix.sh@22 -- # printf %s lshp0wh3so7u4sqcqvtf9enlopluj8kt 00:07:58.333 06:32:53 -- dd/posix.sh@23 -- # printf %s p0brom93ioicyrkjh57cp2ed4jqt30qx 00:07:58.333 06:32:53 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:58.333 [2024-12-05 06:32:53.587034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.333 [2024-12-05 06:32:53.587123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70046 ] 00:07:58.333 [2024-12-05 06:32:53.726669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.333 [2024-12-05 06:32:53.755465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.333  [2024-12-05T06:32:54.058Z] Copying: 32/32 [B] (average 31 kBps) 00:07:58.592 00:07:58.592 06:32:53 -- dd/posix.sh@27 -- # [[ p0brom93ioicyrkjh57cp2ed4jqt30qxlshp0wh3so7u4sqcqvtf9enlopluj8kt == \p\0\b\r\o\m\9\3\i\o\i\c\y\r\k\j\h\5\7\c\p\2\e\d\4\j\q\t\3\0\q\x\l\s\h\p\0\w\h\3\s\o\7\u\4\s\q\c\q\v\t\f\9\e\n\l\o\p\l\u\j\8\k\t ]] 00:07:58.592 00:07:58.592 real 0m0.391s 00:07:58.592 user 0m0.175s 00:07:58.592 sys 0m0.095s 00:07:58.592 ************************************ 00:07:58.592 END TEST dd_flag_append_forced_aio 00:07:58.592 ************************************ 00:07:58.592 06:32:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.592 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.592 06:32:53 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:58.592 06:32:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.592 06:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.592 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.592 ************************************ 00:07:58.592 START TEST dd_flag_directory_forced_aio 00:07:58.592 ************************************ 00:07:58.592 06:32:53 -- common/autotest_common.sh@1114 -- # directory 00:07:58.592 06:32:53 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.592 06:32:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:58.592 06:32:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.592 06:32:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.592 06:32:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.592 06:32:53 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.592 06:32:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.592 06:32:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.592 06:32:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.592 06:32:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.592 06:32:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.592 06:32:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.592 [2024-12-05 06:32:54.013061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.592 [2024-12-05 06:32:54.013144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70067 ] 00:07:58.851 [2024-12-05 06:32:54.135099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.851 [2024-12-05 06:32:54.164741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.851 [2024-12-05 06:32:54.204174] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.851 [2024-12-05 06:32:54.204225] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.851 [2024-12-05 06:32:54.204252] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.851 [2024-12-05 06:32:54.257640] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.851 06:32:54 -- common/autotest_common.sh@653 -- # es=236 00:07:58.851 06:32:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.851 06:32:54 -- common/autotest_common.sh@662 -- # es=108 00:07:59.110 06:32:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.110 06:32:54 -- common/autotest_common.sh@670 -- # es=1 00:07:59.110 06:32:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.110 06:32:54 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.110 06:32:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:59.110 06:32:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.110 06:32:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.110 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.110 06:32:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.110 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.110 06:32:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.110 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.110 06:32:54 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.110 06:32:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.110 06:32:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.110 [2024-12-05 06:32:54.351950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:59.110 [2024-12-05 06:32:54.352034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:07:59.110 [2024-12-05 06:32:54.480021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.110 [2024-12-05 06:32:54.510629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.110 [2024-12-05 06:32:54.550799] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.110 [2024-12-05 06:32:54.550857] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.110 [2024-12-05 06:32:54.550868] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.369 [2024-12-05 06:32:54.604574] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.369 06:32:54 -- common/autotest_common.sh@653 -- # es=236 00:07:59.369 06:32:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.369 06:32:54 -- common/autotest_common.sh@662 -- # es=108 00:07:59.369 06:32:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.369 06:32:54 -- common/autotest_common.sh@670 -- # es=1 00:07:59.369 06:32:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.369 00:07:59.369 real 0m0.694s 00:07:59.369 user 0m0.325s 00:07:59.369 sys 0m0.163s 00:07:59.369 ************************************ 00:07:59.369 END TEST dd_flag_directory_forced_aio 00:07:59.369 ************************************ 00:07:59.369 06:32:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.369 06:32:54 -- common/autotest_common.sh@10 -- # set +x 00:07:59.369 06:32:54 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:59.369 06:32:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:59.369 06:32:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.369 06:32:54 -- common/autotest_common.sh@10 -- # set +x 00:07:59.369 ************************************ 00:07:59.369 START TEST dd_flag_nofollow_forced_aio 00:07:59.369 ************************************ 00:07:59.369 06:32:54 -- common/autotest_common.sh@1114 -- # nofollow 00:07:59.369 06:32:54 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.369 06:32:54 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.369 06:32:54 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.369 06:32:54 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.369 06:32:54 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.369 06:32:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:59.369 06:32:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.369 06:32:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.369 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.369 06:32:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.369 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.369 06:32:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.370 06:32:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.370 06:32:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.370 06:32:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.370 06:32:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.370 [2024-12-05 06:32:54.775570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:59.370 [2024-12-05 06:32:54.775662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:07:59.629 [2024-12-05 06:32:54.911394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.629 [2024-12-05 06:32:54.940653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.629 [2024-12-05 06:32:54.981210] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.629 [2024-12-05 06:32:54.981265] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.629 [2024-12-05 06:32:54.981295] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.629 [2024-12-05 06:32:55.037648] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.886 06:32:55 -- common/autotest_common.sh@653 -- # es=216 00:07:59.886 06:32:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.886 06:32:55 -- common/autotest_common.sh@662 -- # es=88 00:07:59.886 06:32:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:59.886 06:32:55 -- common/autotest_common.sh@670 -- # es=1 00:07:59.886 06:32:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.886 06:32:55 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.886 06:32:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:59.886 06:32:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.886 06:32:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.886 06:32:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.886 06:32:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.886 06:32:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.886 06:32:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.886 06:32:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.886 06:32:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.886 06:32:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.887 06:32:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.887 [2024-12-05 06:32:55.145347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:59.887 [2024-12-05 06:32:55.145444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70115 ] 00:07:59.887 [2024-12-05 06:32:55.281880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.887 [2024-12-05 06:32:55.311069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.143 [2024-12-05 06:32:55.350624] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:00.143 [2024-12-05 06:32:55.350675] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:00.143 [2024-12-05 06:32:55.350706] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.143 [2024-12-05 06:32:55.404223] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:00.143 06:32:55 -- common/autotest_common.sh@653 -- # es=216 00:08:00.143 06:32:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.143 06:32:55 -- common/autotest_common.sh@662 -- # es=88 00:08:00.143 06:32:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:00.143 06:32:55 -- common/autotest_common.sh@670 -- # es=1 00:08:00.143 06:32:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.143 06:32:55 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:00.143 06:32:55 -- dd/common.sh@98 -- # xtrace_disable 00:08:00.143 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:08:00.143 06:32:55 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.143 [2024-12-05 06:32:55.518812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:00.143 [2024-12-05 06:32:55.518957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70117 ] 00:08:00.401 [2024-12-05 06:32:55.653056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.401 [2024-12-05 06:32:55.682634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.401  [2024-12-05T06:32:55.867Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.401 00:08:00.659 06:32:55 -- dd/posix.sh@49 -- # [[ mk4x172y1yu0hc97bfma6c041kq7mppw1tw2g5wnz1uo0k64uo8175zdp0w5ws81t0ma1cmuc8l0n5rc0xiiiw09mvv8w4mlnsn1uddohvjoj8tao4wu6dykr80xfuliwxyswrvaycatro0110ljaqcd6kx2cbsc358eknjq3eet52828y8xtitt5gppojfxtp76jycu30g81txfqv4taa8hqo8n3wyzqrgwtw5ccwertvu0g77nqu70awpew1ne2kb7v64vg9zbvpukzkf0g1u93hn47avwte9fsy45ya1fzdq4ihucvkfm5sqlnmqxwd0txsz5m9kr3sf2m2dm3he37909t3szbpaftg6pvci73b4q1aftz8bdb4066uk0n202ixwcrah1tdxs75p7zqegyvtuhfgpw3ntv28fva08tk0sc7mz40fc9n8n2q5xq6kej53rav4ln45ixyptbpt82rx331rlqgb209j2jy9owkua89b3zmzfccbv56fa == \m\k\4\x\1\7\2\y\1\y\u\0\h\c\9\7\b\f\m\a\6\c\0\4\1\k\q\7\m\p\p\w\1\t\w\2\g\5\w\n\z\1\u\o\0\k\6\4\u\o\8\1\7\5\z\d\p\0\w\5\w\s\8\1\t\0\m\a\1\c\m\u\c\8\l\0\n\5\r\c\0\x\i\i\i\w\0\9\m\v\v\8\w\4\m\l\n\s\n\1\u\d\d\o\h\v\j\o\j\8\t\a\o\4\w\u\6\d\y\k\r\8\0\x\f\u\l\i\w\x\y\s\w\r\v\a\y\c\a\t\r\o\0\1\1\0\l\j\a\q\c\d\6\k\x\2\c\b\s\c\3\5\8\e\k\n\j\q\3\e\e\t\5\2\8\2\8\y\8\x\t\i\t\t\5\g\p\p\o\j\f\x\t\p\7\6\j\y\c\u\3\0\g\8\1\t\x\f\q\v\4\t\a\a\8\h\q\o\8\n\3\w\y\z\q\r\g\w\t\w\5\c\c\w\e\r\t\v\u\0\g\7\7\n\q\u\7\0\a\w\p\e\w\1\n\e\2\k\b\7\v\6\4\v\g\9\z\b\v\p\u\k\z\k\f\0\g\1\u\9\3\h\n\4\7\a\v\w\t\e\9\f\s\y\4\5\y\a\1\f\z\d\q\4\i\h\u\c\v\k\f\m\5\s\q\l\n\m\q\x\w\d\0\t\x\s\z\5\m\9\k\r\3\s\f\2\m\2\d\m\3\h\e\3\7\9\0\9\t\3\s\z\b\p\a\f\t\g\6\p\v\c\i\7\3\b\4\q\1\a\f\t\z\8\b\d\b\4\0\6\6\u\k\0\n\2\0\2\i\x\w\c\r\a\h\1\t\d\x\s\7\5\p\7\z\q\e\g\y\v\t\u\h\f\g\p\w\3\n\t\v\2\8\f\v\a\0\8\t\k\0\s\c\7\m\z\4\0\f\c\9\n\8\n\2\q\5\x\q\6\k\e\j\5\3\r\a\v\4\l\n\4\5\i\x\y\p\t\b\p\t\8\2\r\x\3\3\1\r\l\q\g\b\2\0\9\j\2\j\y\9\o\w\k\u\a\8\9\b\3\z\m\z\f\c\c\b\v\5\6\f\a ]] 00:08:00.659 00:08:00.659 real 0m1.152s 00:08:00.659 user 0m0.570s 00:08:00.659 sys 0m0.255s 00:08:00.659 06:32:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.659 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:08:00.659 ************************************ 00:08:00.659 END TEST dd_flag_nofollow_forced_aio 00:08:00.659 ************************************ 00:08:00.659 06:32:55 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:00.659 06:32:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.659 06:32:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.659 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:08:00.659 ************************************ 00:08:00.659 START TEST dd_flag_noatime_forced_aio 00:08:00.659 ************************************ 00:08:00.659 06:32:55 -- common/autotest_common.sh@1114 -- # noatime 00:08:00.659 06:32:55 -- dd/posix.sh@53 -- # local atime_if 00:08:00.659 06:32:55 -- dd/posix.sh@54 -- # local atime_of 00:08:00.659 06:32:55 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.659 06:32:55 -- dd/common.sh@98 -- # xtrace_disable 00:08:00.659 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:08:00.659 06:32:55 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.659 06:32:55 -- dd/posix.sh@60 -- 
# atime_if=1733380375 00:08:00.659 06:32:55 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.659 06:32:55 -- dd/posix.sh@61 -- # atime_of=1733380375 00:08:00.659 06:32:55 -- dd/posix.sh@66 -- # sleep 1 00:08:01.594 06:32:56 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.594 [2024-12-05 06:32:56.991197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:01.594 [2024-12-05 06:32:56.991297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70157 ] 00:08:01.853 [2024-12-05 06:32:57.126535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.853 [2024-12-05 06:32:57.159778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.853  [2024-12-05T06:32:57.579Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.113 00:08:02.113 06:32:57 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.113 06:32:57 -- dd/posix.sh@69 -- # (( atime_if == 1733380375 )) 00:08:02.113 06:32:57 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.113 06:32:57 -- dd/posix.sh@70 -- # (( atime_of == 1733380375 )) 00:08:02.113 06:32:57 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.113 [2024-12-05 06:32:57.398310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:02.113 [2024-12-05 06:32:57.398427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70169 ] 00:08:02.113 [2024-12-05 06:32:57.533011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.113 [2024-12-05 06:32:57.562272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.373  [2024-12-05T06:32:57.839Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.373 00:08:02.373 06:32:57 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.373 06:32:57 -- dd/posix.sh@73 -- # (( atime_if < 1733380377 )) 00:08:02.373 00:08:02.373 real 0m1.833s 00:08:02.373 user 0m0.401s 00:08:02.373 sys 0m0.195s 00:08:02.373 06:32:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.373 ************************************ 00:08:02.373 END TEST dd_flag_noatime_forced_aio 00:08:02.373 ************************************ 00:08:02.373 06:32:57 -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 06:32:57 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:02.373 06:32:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.373 06:32:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.373 06:32:57 -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 ************************************ 00:08:02.373 START TEST dd_flags_misc_forced_aio 00:08:02.373 ************************************ 00:08:02.373 06:32:57 -- common/autotest_common.sh@1114 -- # io 00:08:02.373 06:32:57 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:02.373 06:32:57 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:02.373 06:32:57 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:02.373 06:32:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.373 06:32:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.373 06:32:57 -- dd/common.sh@98 -- # xtrace_disable 00:08:02.373 06:32:57 -- common/autotest_common.sh@10 -- # set +x 00:08:02.373 06:32:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.373 06:32:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.632 [2024-12-05 06:32:57.864656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
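The noatime pass that just completed hinges on two stat calls around the copy: --iflag=noatime must leave the source's access time untouched, while the plain copy afterwards is expected to advance it. A rough stand-alone equivalent, with GNU dd standing in for spdk_dd --aio and /tmp paths for the repo's dd.dump files; note that noatime may need file ownership, and whether the final check fires can depend on the filesystem's relatime settings:

    head -c 512 /dev/urandom > /tmp/dd.dump0
    atime_if=$(stat --printf=%X /tmp/dd.dump0)
    sleep 1                                  # make an atime change observable at 1s resolution
    dd if=/tmp/dd.dump0 iflag=noatime of=/tmp/dd.dump1 2>/dev/null
    (( $(stat --printf=%X /tmp/dd.dump0) == atime_if )) && echo 'atime preserved'
    dd if=/tmp/dd.dump0 of=/tmp/dd.dump1 2>/dev/null
    (( $(stat --printf=%X /tmp/dd.dump0) > atime_if )) && echo 'atime advanced'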
00:08:02.632 [2024-12-05 06:32:57.864912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70195 ] 00:08:02.632 [2024-12-05 06:32:58.001455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.632 [2024-12-05 06:32:58.031861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.632  [2024-12-05T06:32:58.358Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.892 00:08:02.892 06:32:58 -- dd/posix.sh@93 -- # [[ hj9dks6xd1zhba3d5g8roactqcdbigjw0aivmtzh6rscoypi3xwm9j1fbu3k0nmz9lhlomwidsvp2pikct8l811mnut6h4gpkya5g6h6echk3vfh8xdl3rm4ivlcjb9o7la7ql3efl6451f2uw9ulwy2czmcoftt863s6m5z3uhgnwcz56bn2xh8ubeo9ioe8dz20tbhezk4vbxtpwex1ucab69rfkc6o4rmfls4wf4lgy8ui7ub8s516kehlu9lm49rxo5xvkx6lsa0h1yhb01gv7ppx7k05gm658mm0uaxmkkvmy2azxqplkqmay6ab83swi198umlpzr5897dgsyc4bm4lqgbpqnog1tq8xwsh9n3bt1cb44xqo6bv1dk1cgyfc2n3wnsvrjo9fgqleja0fow0q1bkhji4ny47ygl5ktji7la6r23atazybjdyncvpqv8prji337hrobu36ym0wi4ipeoidvfaq5h85t3v5128t4z7k499kpfbzrb == \h\j\9\d\k\s\6\x\d\1\z\h\b\a\3\d\5\g\8\r\o\a\c\t\q\c\d\b\i\g\j\w\0\a\i\v\m\t\z\h\6\r\s\c\o\y\p\i\3\x\w\m\9\j\1\f\b\u\3\k\0\n\m\z\9\l\h\l\o\m\w\i\d\s\v\p\2\p\i\k\c\t\8\l\8\1\1\m\n\u\t\6\h\4\g\p\k\y\a\5\g\6\h\6\e\c\h\k\3\v\f\h\8\x\d\l\3\r\m\4\i\v\l\c\j\b\9\o\7\l\a\7\q\l\3\e\f\l\6\4\5\1\f\2\u\w\9\u\l\w\y\2\c\z\m\c\o\f\t\t\8\6\3\s\6\m\5\z\3\u\h\g\n\w\c\z\5\6\b\n\2\x\h\8\u\b\e\o\9\i\o\e\8\d\z\2\0\t\b\h\e\z\k\4\v\b\x\t\p\w\e\x\1\u\c\a\b\6\9\r\f\k\c\6\o\4\r\m\f\l\s\4\w\f\4\l\g\y\8\u\i\7\u\b\8\s\5\1\6\k\e\h\l\u\9\l\m\4\9\r\x\o\5\x\v\k\x\6\l\s\a\0\h\1\y\h\b\0\1\g\v\7\p\p\x\7\k\0\5\g\m\6\5\8\m\m\0\u\a\x\m\k\k\v\m\y\2\a\z\x\q\p\l\k\q\m\a\y\6\a\b\8\3\s\w\i\1\9\8\u\m\l\p\z\r\5\8\9\7\d\g\s\y\c\4\b\m\4\l\q\g\b\p\q\n\o\g\1\t\q\8\x\w\s\h\9\n\3\b\t\1\c\b\4\4\x\q\o\6\b\v\1\d\k\1\c\g\y\f\c\2\n\3\w\n\s\v\r\j\o\9\f\g\q\l\e\j\a\0\f\o\w\0\q\1\b\k\h\j\i\4\n\y\4\7\y\g\l\5\k\t\j\i\7\l\a\6\r\2\3\a\t\a\z\y\b\j\d\y\n\c\v\p\q\v\8\p\r\j\i\3\3\7\h\r\o\b\u\3\6\y\m\0\w\i\4\i\p\e\o\i\d\v\f\a\q\5\h\8\5\t\3\v\5\1\2\8\t\4\z\7\k\4\9\9\k\p\f\b\z\r\b ]] 00:08:02.892 06:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.892 06:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.892 [2024-12-05 06:32:58.250166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:02.892 [2024-12-05 06:32:58.250260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70203 ] 00:08:03.151 [2024-12-05 06:32:58.386595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.151 [2024-12-05 06:32:58.416050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.151  [2024-12-05T06:32:58.617Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.151 00:08:03.151 06:32:58 -- dd/posix.sh@93 -- # [[ hj9dks6xd1zhba3d5g8roactqcdbigjw0aivmtzh6rscoypi3xwm9j1fbu3k0nmz9lhlomwidsvp2pikct8l811mnut6h4gpkya5g6h6echk3vfh8xdl3rm4ivlcjb9o7la7ql3efl6451f2uw9ulwy2czmcoftt863s6m5z3uhgnwcz56bn2xh8ubeo9ioe8dz20tbhezk4vbxtpwex1ucab69rfkc6o4rmfls4wf4lgy8ui7ub8s516kehlu9lm49rxo5xvkx6lsa0h1yhb01gv7ppx7k05gm658mm0uaxmkkvmy2azxqplkqmay6ab83swi198umlpzr5897dgsyc4bm4lqgbpqnog1tq8xwsh9n3bt1cb44xqo6bv1dk1cgyfc2n3wnsvrjo9fgqleja0fow0q1bkhji4ny47ygl5ktji7la6r23atazybjdyncvpqv8prji337hrobu36ym0wi4ipeoidvfaq5h85t3v5128t4z7k499kpfbzrb == \h\j\9\d\k\s\6\x\d\1\z\h\b\a\3\d\5\g\8\r\o\a\c\t\q\c\d\b\i\g\j\w\0\a\i\v\m\t\z\h\6\r\s\c\o\y\p\i\3\x\w\m\9\j\1\f\b\u\3\k\0\n\m\z\9\l\h\l\o\m\w\i\d\s\v\p\2\p\i\k\c\t\8\l\8\1\1\m\n\u\t\6\h\4\g\p\k\y\a\5\g\6\h\6\e\c\h\k\3\v\f\h\8\x\d\l\3\r\m\4\i\v\l\c\j\b\9\o\7\l\a\7\q\l\3\e\f\l\6\4\5\1\f\2\u\w\9\u\l\w\y\2\c\z\m\c\o\f\t\t\8\6\3\s\6\m\5\z\3\u\h\g\n\w\c\z\5\6\b\n\2\x\h\8\u\b\e\o\9\i\o\e\8\d\z\2\0\t\b\h\e\z\k\4\v\b\x\t\p\w\e\x\1\u\c\a\b\6\9\r\f\k\c\6\o\4\r\m\f\l\s\4\w\f\4\l\g\y\8\u\i\7\u\b\8\s\5\1\6\k\e\h\l\u\9\l\m\4\9\r\x\o\5\x\v\k\x\6\l\s\a\0\h\1\y\h\b\0\1\g\v\7\p\p\x\7\k\0\5\g\m\6\5\8\m\m\0\u\a\x\m\k\k\v\m\y\2\a\z\x\q\p\l\k\q\m\a\y\6\a\b\8\3\s\w\i\1\9\8\u\m\l\p\z\r\5\8\9\7\d\g\s\y\c\4\b\m\4\l\q\g\b\p\q\n\o\g\1\t\q\8\x\w\s\h\9\n\3\b\t\1\c\b\4\4\x\q\o\6\b\v\1\d\k\1\c\g\y\f\c\2\n\3\w\n\s\v\r\j\o\9\f\g\q\l\e\j\a\0\f\o\w\0\q\1\b\k\h\j\i\4\n\y\4\7\y\g\l\5\k\t\j\i\7\l\a\6\r\2\3\a\t\a\z\y\b\j\d\y\n\c\v\p\q\v\8\p\r\j\i\3\3\7\h\r\o\b\u\3\6\y\m\0\w\i\4\i\p\e\o\i\d\v\f\a\q\5\h\8\5\t\3\v\5\1\2\8\t\4\z\7\k\4\9\9\k\p\f\b\z\r\b ]] 00:08:03.151 06:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.151 06:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:03.411 [2024-12-05 06:32:58.625679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:03.411 [2024-12-05 06:32:58.625775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70205 ] 00:08:03.411 [2024-12-05 06:32:58.762521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.411 [2024-12-05 06:32:58.792127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.411  [2024-12-05T06:32:59.136Z] Copying: 512/512 [B] (average 166 kBps) 00:08:03.670 00:08:03.670 06:32:58 -- dd/posix.sh@93 -- # [[ hj9dks6xd1zhba3d5g8roactqcdbigjw0aivmtzh6rscoypi3xwm9j1fbu3k0nmz9lhlomwidsvp2pikct8l811mnut6h4gpkya5g6h6echk3vfh8xdl3rm4ivlcjb9o7la7ql3efl6451f2uw9ulwy2czmcoftt863s6m5z3uhgnwcz56bn2xh8ubeo9ioe8dz20tbhezk4vbxtpwex1ucab69rfkc6o4rmfls4wf4lgy8ui7ub8s516kehlu9lm49rxo5xvkx6lsa0h1yhb01gv7ppx7k05gm658mm0uaxmkkvmy2azxqplkqmay6ab83swi198umlpzr5897dgsyc4bm4lqgbpqnog1tq8xwsh9n3bt1cb44xqo6bv1dk1cgyfc2n3wnsvrjo9fgqleja0fow0q1bkhji4ny47ygl5ktji7la6r23atazybjdyncvpqv8prji337hrobu36ym0wi4ipeoidvfaq5h85t3v5128t4z7k499kpfbzrb == \h\j\9\d\k\s\6\x\d\1\z\h\b\a\3\d\5\g\8\r\o\a\c\t\q\c\d\b\i\g\j\w\0\a\i\v\m\t\z\h\6\r\s\c\o\y\p\i\3\x\w\m\9\j\1\f\b\u\3\k\0\n\m\z\9\l\h\l\o\m\w\i\d\s\v\p\2\p\i\k\c\t\8\l\8\1\1\m\n\u\t\6\h\4\g\p\k\y\a\5\g\6\h\6\e\c\h\k\3\v\f\h\8\x\d\l\3\r\m\4\i\v\l\c\j\b\9\o\7\l\a\7\q\l\3\e\f\l\6\4\5\1\f\2\u\w\9\u\l\w\y\2\c\z\m\c\o\f\t\t\8\6\3\s\6\m\5\z\3\u\h\g\n\w\c\z\5\6\b\n\2\x\h\8\u\b\e\o\9\i\o\e\8\d\z\2\0\t\b\h\e\z\k\4\v\b\x\t\p\w\e\x\1\u\c\a\b\6\9\r\f\k\c\6\o\4\r\m\f\l\s\4\w\f\4\l\g\y\8\u\i\7\u\b\8\s\5\1\6\k\e\h\l\u\9\l\m\4\9\r\x\o\5\x\v\k\x\6\l\s\a\0\h\1\y\h\b\0\1\g\v\7\p\p\x\7\k\0\5\g\m\6\5\8\m\m\0\u\a\x\m\k\k\v\m\y\2\a\z\x\q\p\l\k\q\m\a\y\6\a\b\8\3\s\w\i\1\9\8\u\m\l\p\z\r\5\8\9\7\d\g\s\y\c\4\b\m\4\l\q\g\b\p\q\n\o\g\1\t\q\8\x\w\s\h\9\n\3\b\t\1\c\b\4\4\x\q\o\6\b\v\1\d\k\1\c\g\y\f\c\2\n\3\w\n\s\v\r\j\o\9\f\g\q\l\e\j\a\0\f\o\w\0\q\1\b\k\h\j\i\4\n\y\4\7\y\g\l\5\k\t\j\i\7\l\a\6\r\2\3\a\t\a\z\y\b\j\d\y\n\c\v\p\q\v\8\p\r\j\i\3\3\7\h\r\o\b\u\3\6\y\m\0\w\i\4\i\p\e\o\i\d\v\f\a\q\5\h\8\5\t\3\v\5\1\2\8\t\4\z\7\k\4\9\9\k\p\f\b\z\r\b ]] 00:08:03.670 06:32:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.670 06:32:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.670 [2024-12-05 06:32:59.022979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:03.670 [2024-12-05 06:32:59.023074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70212 ] 00:08:03.929 [2024-12-05 06:32:59.158450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.929 [2024-12-05 06:32:59.191838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.929  [2024-12-05T06:32:59.395Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.929 00:08:03.929 06:32:59 -- dd/posix.sh@93 -- # [[ hj9dks6xd1zhba3d5g8roactqcdbigjw0aivmtzh6rscoypi3xwm9j1fbu3k0nmz9lhlomwidsvp2pikct8l811mnut6h4gpkya5g6h6echk3vfh8xdl3rm4ivlcjb9o7la7ql3efl6451f2uw9ulwy2czmcoftt863s6m5z3uhgnwcz56bn2xh8ubeo9ioe8dz20tbhezk4vbxtpwex1ucab69rfkc6o4rmfls4wf4lgy8ui7ub8s516kehlu9lm49rxo5xvkx6lsa0h1yhb01gv7ppx7k05gm658mm0uaxmkkvmy2azxqplkqmay6ab83swi198umlpzr5897dgsyc4bm4lqgbpqnog1tq8xwsh9n3bt1cb44xqo6bv1dk1cgyfc2n3wnsvrjo9fgqleja0fow0q1bkhji4ny47ygl5ktji7la6r23atazybjdyncvpqv8prji337hrobu36ym0wi4ipeoidvfaq5h85t3v5128t4z7k499kpfbzrb == \h\j\9\d\k\s\6\x\d\1\z\h\b\a\3\d\5\g\8\r\o\a\c\t\q\c\d\b\i\g\j\w\0\a\i\v\m\t\z\h\6\r\s\c\o\y\p\i\3\x\w\m\9\j\1\f\b\u\3\k\0\n\m\z\9\l\h\l\o\m\w\i\d\s\v\p\2\p\i\k\c\t\8\l\8\1\1\m\n\u\t\6\h\4\g\p\k\y\a\5\g\6\h\6\e\c\h\k\3\v\f\h\8\x\d\l\3\r\m\4\i\v\l\c\j\b\9\o\7\l\a\7\q\l\3\e\f\l\6\4\5\1\f\2\u\w\9\u\l\w\y\2\c\z\m\c\o\f\t\t\8\6\3\s\6\m\5\z\3\u\h\g\n\w\c\z\5\6\b\n\2\x\h\8\u\b\e\o\9\i\o\e\8\d\z\2\0\t\b\h\e\z\k\4\v\b\x\t\p\w\e\x\1\u\c\a\b\6\9\r\f\k\c\6\o\4\r\m\f\l\s\4\w\f\4\l\g\y\8\u\i\7\u\b\8\s\5\1\6\k\e\h\l\u\9\l\m\4\9\r\x\o\5\x\v\k\x\6\l\s\a\0\h\1\y\h\b\0\1\g\v\7\p\p\x\7\k\0\5\g\m\6\5\8\m\m\0\u\a\x\m\k\k\v\m\y\2\a\z\x\q\p\l\k\q\m\a\y\6\a\b\8\3\s\w\i\1\9\8\u\m\l\p\z\r\5\8\9\7\d\g\s\y\c\4\b\m\4\l\q\g\b\p\q\n\o\g\1\t\q\8\x\w\s\h\9\n\3\b\t\1\c\b\4\4\x\q\o\6\b\v\1\d\k\1\c\g\y\f\c\2\n\3\w\n\s\v\r\j\o\9\f\g\q\l\e\j\a\0\f\o\w\0\q\1\b\k\h\j\i\4\n\y\4\7\y\g\l\5\k\t\j\i\7\l\a\6\r\2\3\a\t\a\z\y\b\j\d\y\n\c\v\p\q\v\8\p\r\j\i\3\3\7\h\r\o\b\u\3\6\y\m\0\w\i\4\i\p\e\o\i\d\v\f\a\q\5\h\8\5\t\3\v\5\1\2\8\t\4\z\7\k\4\9\9\k\p\f\b\z\r\b ]] 00:08:03.929 06:32:59 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.929 06:32:59 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.929 06:32:59 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.929 06:32:59 -- common/autotest_common.sh@10 -- # set +x 00:08:03.929 06:32:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.929 06:32:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:04.189 [2024-12-05 06:32:59.433894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.189 [2024-12-05 06:32:59.434149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:08:04.189 [2024-12-05 06:32:59.569257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.189 [2024-12-05 06:32:59.598671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.189  [2024-12-05T06:32:59.914Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.448 00:08:04.448 06:32:59 -- dd/posix.sh@93 -- # [[ 468f7mm4enlcv0ghl47nsy5s5eqgct03tl2pe7guurhh4qy9bwhv5z68jh3jsbp64r0k41szcfi26b2eronjws3fmm0t7zdu1058chn1n24humoie9y6shriy4f4cbgx3lulx5fdrrh3tqrxlr6vszn467y14kbae128zd72x0hxenroo62ri2bqvmugof1annd6axo1dtuunh2uubmls8vvt9vbasb9h8hs92n67mzahzl80rb0yucnicgsoqi5wngdzmnkpatppjimcwmjzzi1tchzmsfwgy18rhy3v1ajxfpso6bavoip90debo5d7wlh4x8ejiuxrknbkssn0g4qus3ejtqrsj8wn40ot3fx33bhr7f7gmoo2m6nkbo8ea5ewzirbggmchvs1v779beay0rzps59ycbvm81nfu32pq5qpw9urw3zqjmw3vqadogtcyglrf436q05ouphwiavuc90ew7kcf7yi1epfiy4np8av6j882fmu31mcas8 == \4\6\8\f\7\m\m\4\e\n\l\c\v\0\g\h\l\4\7\n\s\y\5\s\5\e\q\g\c\t\0\3\t\l\2\p\e\7\g\u\u\r\h\h\4\q\y\9\b\w\h\v\5\z\6\8\j\h\3\j\s\b\p\6\4\r\0\k\4\1\s\z\c\f\i\2\6\b\2\e\r\o\n\j\w\s\3\f\m\m\0\t\7\z\d\u\1\0\5\8\c\h\n\1\n\2\4\h\u\m\o\i\e\9\y\6\s\h\r\i\y\4\f\4\c\b\g\x\3\l\u\l\x\5\f\d\r\r\h\3\t\q\r\x\l\r\6\v\s\z\n\4\6\7\y\1\4\k\b\a\e\1\2\8\z\d\7\2\x\0\h\x\e\n\r\o\o\6\2\r\i\2\b\q\v\m\u\g\o\f\1\a\n\n\d\6\a\x\o\1\d\t\u\u\n\h\2\u\u\b\m\l\s\8\v\v\t\9\v\b\a\s\b\9\h\8\h\s\9\2\n\6\7\m\z\a\h\z\l\8\0\r\b\0\y\u\c\n\i\c\g\s\o\q\i\5\w\n\g\d\z\m\n\k\p\a\t\p\p\j\i\m\c\w\m\j\z\z\i\1\t\c\h\z\m\s\f\w\g\y\1\8\r\h\y\3\v\1\a\j\x\f\p\s\o\6\b\a\v\o\i\p\9\0\d\e\b\o\5\d\7\w\l\h\4\x\8\e\j\i\u\x\r\k\n\b\k\s\s\n\0\g\4\q\u\s\3\e\j\t\q\r\s\j\8\w\n\4\0\o\t\3\f\x\3\3\b\h\r\7\f\7\g\m\o\o\2\m\6\n\k\b\o\8\e\a\5\e\w\z\i\r\b\g\g\m\c\h\v\s\1\v\7\7\9\b\e\a\y\0\r\z\p\s\5\9\y\c\b\v\m\8\1\n\f\u\3\2\p\q\5\q\p\w\9\u\r\w\3\z\q\j\m\w\3\v\q\a\d\o\g\t\c\y\g\l\r\f\4\3\6\q\0\5\o\u\p\h\w\i\a\v\u\c\9\0\e\w\7\k\c\f\7\y\i\1\e\p\f\i\y\4\n\p\8\a\v\6\j\8\8\2\f\m\u\3\1\m\c\a\s\8 ]] 00:08:04.448 06:32:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.448 06:32:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:04.448 [2024-12-05 06:32:59.846964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.448 [2024-12-05 06:32:59.847058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70227 ] 00:08:04.707 [2024-12-05 06:32:59.981547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.707 [2024-12-05 06:33:00.012521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.707  [2024-12-05T06:33:00.432Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.966 00:08:04.966 06:33:00 -- dd/posix.sh@93 -- # [[ 468f7mm4enlcv0ghl47nsy5s5eqgct03tl2pe7guurhh4qy9bwhv5z68jh3jsbp64r0k41szcfi26b2eronjws3fmm0t7zdu1058chn1n24humoie9y6shriy4f4cbgx3lulx5fdrrh3tqrxlr6vszn467y14kbae128zd72x0hxenroo62ri2bqvmugof1annd6axo1dtuunh2uubmls8vvt9vbasb9h8hs92n67mzahzl80rb0yucnicgsoqi5wngdzmnkpatppjimcwmjzzi1tchzmsfwgy18rhy3v1ajxfpso6bavoip90debo5d7wlh4x8ejiuxrknbkssn0g4qus3ejtqrsj8wn40ot3fx33bhr7f7gmoo2m6nkbo8ea5ewzirbggmchvs1v779beay0rzps59ycbvm81nfu32pq5qpw9urw3zqjmw3vqadogtcyglrf436q05ouphwiavuc90ew7kcf7yi1epfiy4np8av6j882fmu31mcas8 == \4\6\8\f\7\m\m\4\e\n\l\c\v\0\g\h\l\4\7\n\s\y\5\s\5\e\q\g\c\t\0\3\t\l\2\p\e\7\g\u\u\r\h\h\4\q\y\9\b\w\h\v\5\z\6\8\j\h\3\j\s\b\p\6\4\r\0\k\4\1\s\z\c\f\i\2\6\b\2\e\r\o\n\j\w\s\3\f\m\m\0\t\7\z\d\u\1\0\5\8\c\h\n\1\n\2\4\h\u\m\o\i\e\9\y\6\s\h\r\i\y\4\f\4\c\b\g\x\3\l\u\l\x\5\f\d\r\r\h\3\t\q\r\x\l\r\6\v\s\z\n\4\6\7\y\1\4\k\b\a\e\1\2\8\z\d\7\2\x\0\h\x\e\n\r\o\o\6\2\r\i\2\b\q\v\m\u\g\o\f\1\a\n\n\d\6\a\x\o\1\d\t\u\u\n\h\2\u\u\b\m\l\s\8\v\v\t\9\v\b\a\s\b\9\h\8\h\s\9\2\n\6\7\m\z\a\h\z\l\8\0\r\b\0\y\u\c\n\i\c\g\s\o\q\i\5\w\n\g\d\z\m\n\k\p\a\t\p\p\j\i\m\c\w\m\j\z\z\i\1\t\c\h\z\m\s\f\w\g\y\1\8\r\h\y\3\v\1\a\j\x\f\p\s\o\6\b\a\v\o\i\p\9\0\d\e\b\o\5\d\7\w\l\h\4\x\8\e\j\i\u\x\r\k\n\b\k\s\s\n\0\g\4\q\u\s\3\e\j\t\q\r\s\j\8\w\n\4\0\o\t\3\f\x\3\3\b\h\r\7\f\7\g\m\o\o\2\m\6\n\k\b\o\8\e\a\5\e\w\z\i\r\b\g\g\m\c\h\v\s\1\v\7\7\9\b\e\a\y\0\r\z\p\s\5\9\y\c\b\v\m\8\1\n\f\u\3\2\p\q\5\q\p\w\9\u\r\w\3\z\q\j\m\w\3\v\q\a\d\o\g\t\c\y\g\l\r\f\4\3\6\q\0\5\o\u\p\h\w\i\a\v\u\c\9\0\e\w\7\k\c\f\7\y\i\1\e\p\f\i\y\4\n\p\8\a\v\6\j\8\8\2\f\m\u\3\1\m\c\a\s\8 ]] 00:08:04.966 06:33:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.966 06:33:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.966 [2024-12-05 06:33:00.229229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.966 [2024-12-05 06:33:00.229480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:08:04.966 [2024-12-05 06:33:00.352143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.966 [2024-12-05 06:33:00.384513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.225  [2024-12-05T06:33:00.691Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.225 00:08:05.225 06:33:00 -- dd/posix.sh@93 -- # [[ 468f7mm4enlcv0ghl47nsy5s5eqgct03tl2pe7guurhh4qy9bwhv5z68jh3jsbp64r0k41szcfi26b2eronjws3fmm0t7zdu1058chn1n24humoie9y6shriy4f4cbgx3lulx5fdrrh3tqrxlr6vszn467y14kbae128zd72x0hxenroo62ri2bqvmugof1annd6axo1dtuunh2uubmls8vvt9vbasb9h8hs92n67mzahzl80rb0yucnicgsoqi5wngdzmnkpatppjimcwmjzzi1tchzmsfwgy18rhy3v1ajxfpso6bavoip90debo5d7wlh4x8ejiuxrknbkssn0g4qus3ejtqrsj8wn40ot3fx33bhr7f7gmoo2m6nkbo8ea5ewzirbggmchvs1v779beay0rzps59ycbvm81nfu32pq5qpw9urw3zqjmw3vqadogtcyglrf436q05ouphwiavuc90ew7kcf7yi1epfiy4np8av6j882fmu31mcas8 == \4\6\8\f\7\m\m\4\e\n\l\c\v\0\g\h\l\4\7\n\s\y\5\s\5\e\q\g\c\t\0\3\t\l\2\p\e\7\g\u\u\r\h\h\4\q\y\9\b\w\h\v\5\z\6\8\j\h\3\j\s\b\p\6\4\r\0\k\4\1\s\z\c\f\i\2\6\b\2\e\r\o\n\j\w\s\3\f\m\m\0\t\7\z\d\u\1\0\5\8\c\h\n\1\n\2\4\h\u\m\o\i\e\9\y\6\s\h\r\i\y\4\f\4\c\b\g\x\3\l\u\l\x\5\f\d\r\r\h\3\t\q\r\x\l\r\6\v\s\z\n\4\6\7\y\1\4\k\b\a\e\1\2\8\z\d\7\2\x\0\h\x\e\n\r\o\o\6\2\r\i\2\b\q\v\m\u\g\o\f\1\a\n\n\d\6\a\x\o\1\d\t\u\u\n\h\2\u\u\b\m\l\s\8\v\v\t\9\v\b\a\s\b\9\h\8\h\s\9\2\n\6\7\m\z\a\h\z\l\8\0\r\b\0\y\u\c\n\i\c\g\s\o\q\i\5\w\n\g\d\z\m\n\k\p\a\t\p\p\j\i\m\c\w\m\j\z\z\i\1\t\c\h\z\m\s\f\w\g\y\1\8\r\h\y\3\v\1\a\j\x\f\p\s\o\6\b\a\v\o\i\p\9\0\d\e\b\o\5\d\7\w\l\h\4\x\8\e\j\i\u\x\r\k\n\b\k\s\s\n\0\g\4\q\u\s\3\e\j\t\q\r\s\j\8\w\n\4\0\o\t\3\f\x\3\3\b\h\r\7\f\7\g\m\o\o\2\m\6\n\k\b\o\8\e\a\5\e\w\z\i\r\b\g\g\m\c\h\v\s\1\v\7\7\9\b\e\a\y\0\r\z\p\s\5\9\y\c\b\v\m\8\1\n\f\u\3\2\p\q\5\q\p\w\9\u\r\w\3\z\q\j\m\w\3\v\q\a\d\o\g\t\c\y\g\l\r\f\4\3\6\q\0\5\o\u\p\h\w\i\a\v\u\c\9\0\e\w\7\k\c\f\7\y\i\1\e\p\f\i\y\4\n\p\8\a\v\6\j\8\8\2\f\m\u\3\1\m\c\a\s\8 ]] 00:08:05.225 06:33:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.225 06:33:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:05.225 [2024-12-05 06:33:00.629678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:05.226 [2024-12-05 06:33:00.629796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70237 ] 00:08:05.485 [2024-12-05 06:33:00.768101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.485 [2024-12-05 06:33:00.803063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.485  [2024-12-05T06:33:01.210Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.744 00:08:05.744 ************************************ 00:08:05.744 END TEST dd_flags_misc_forced_aio 00:08:05.744 ************************************ 00:08:05.744 06:33:01 -- dd/posix.sh@93 -- # [[ 468f7mm4enlcv0ghl47nsy5s5eqgct03tl2pe7guurhh4qy9bwhv5z68jh3jsbp64r0k41szcfi26b2eronjws3fmm0t7zdu1058chn1n24humoie9y6shriy4f4cbgx3lulx5fdrrh3tqrxlr6vszn467y14kbae128zd72x0hxenroo62ri2bqvmugof1annd6axo1dtuunh2uubmls8vvt9vbasb9h8hs92n67mzahzl80rb0yucnicgsoqi5wngdzmnkpatppjimcwmjzzi1tchzmsfwgy18rhy3v1ajxfpso6bavoip90debo5d7wlh4x8ejiuxrknbkssn0g4qus3ejtqrsj8wn40ot3fx33bhr7f7gmoo2m6nkbo8ea5ewzirbggmchvs1v779beay0rzps59ycbvm81nfu32pq5qpw9urw3zqjmw3vqadogtcyglrf436q05ouphwiavuc90ew7kcf7yi1epfiy4np8av6j882fmu31mcas8 == \4\6\8\f\7\m\m\4\e\n\l\c\v\0\g\h\l\4\7\n\s\y\5\s\5\e\q\g\c\t\0\3\t\l\2\p\e\7\g\u\u\r\h\h\4\q\y\9\b\w\h\v\5\z\6\8\j\h\3\j\s\b\p\6\4\r\0\k\4\1\s\z\c\f\i\2\6\b\2\e\r\o\n\j\w\s\3\f\m\m\0\t\7\z\d\u\1\0\5\8\c\h\n\1\n\2\4\h\u\m\o\i\e\9\y\6\s\h\r\i\y\4\f\4\c\b\g\x\3\l\u\l\x\5\f\d\r\r\h\3\t\q\r\x\l\r\6\v\s\z\n\4\6\7\y\1\4\k\b\a\e\1\2\8\z\d\7\2\x\0\h\x\e\n\r\o\o\6\2\r\i\2\b\q\v\m\u\g\o\f\1\a\n\n\d\6\a\x\o\1\d\t\u\u\n\h\2\u\u\b\m\l\s\8\v\v\t\9\v\b\a\s\b\9\h\8\h\s\9\2\n\6\7\m\z\a\h\z\l\8\0\r\b\0\y\u\c\n\i\c\g\s\o\q\i\5\w\n\g\d\z\m\n\k\p\a\t\p\p\j\i\m\c\w\m\j\z\z\i\1\t\c\h\z\m\s\f\w\g\y\1\8\r\h\y\3\v\1\a\j\x\f\p\s\o\6\b\a\v\o\i\p\9\0\d\e\b\o\5\d\7\w\l\h\4\x\8\e\j\i\u\x\r\k\n\b\k\s\s\n\0\g\4\q\u\s\3\e\j\t\q\r\s\j\8\w\n\4\0\o\t\3\f\x\3\3\b\h\r\7\f\7\g\m\o\o\2\m\6\n\k\b\o\8\e\a\5\e\w\z\i\r\b\g\g\m\c\h\v\s\1\v\7\7\9\b\e\a\y\0\r\z\p\s\5\9\y\c\b\v\m\8\1\n\f\u\3\2\p\q\5\q\p\w\9\u\r\w\3\z\q\j\m\w\3\v\q\a\d\o\g\t\c\y\g\l\r\f\4\3\6\q\0\5\o\u\p\h\w\i\a\v\u\c\9\0\e\w\7\k\c\f\7\y\i\1\e\p\f\i\y\4\n\p\8\a\v\6\j\8\8\2\f\m\u\3\1\m\c\a\s\8 ]] 00:08:05.744 00:08:05.744 real 0m3.210s 00:08:05.744 user 0m1.502s 00:08:05.744 sys 0m0.718s 00:08:05.744 06:33:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.744 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.744 06:33:01 -- dd/posix.sh@1 -- # cleanup 00:08:05.744 06:33:01 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.744 06:33:01 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.744 00:08:05.744 real 0m15.173s 00:08:05.744 user 0m6.142s 00:08:05.744 sys 0m3.211s 00:08:05.744 06:33:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.744 ************************************ 00:08:05.744 END TEST spdk_dd_posix 00:08:05.744 ************************************ 00:08:05.744 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.744 06:33:01 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.744 06:33:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.744 06:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 
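That closes the posix suite: the misc test walked each read flag in flags_ro against every write flag in flags_rw, eight copies in all, each verified by matching the output against the 512 random bytes generated per read flag (the long [[ ... == \... ]] lines above are that base64 comparison under xtrace). The loop shape, sketched with GNU dd in place of spdk_dd --aio and cmp in place of the base64 pattern match; iflag=direct needs a filesystem with O_DIRECT support:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      head -c 512 /dev/urandom > /tmp/dd.dump0      # fresh bytes per read flag, as gen_bytes does
      for flag_rw in "${flags_rw[@]}"; do
        dd if=/tmp/dd.dump0 iflag="$flag_ro" of=/tmp/dd.dump1 oflag="$flag_rw" 2>/dev/null
        cmp -s /tmp/dd.dump0 /tmp/dd.dump1 && echo "ok: $flag_ro -> $flag_rw"
      done
    done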
00:08:05.744 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:05.744 ************************************ 00:08:05.744 START TEST spdk_dd_malloc 00:08:05.744 ************************************ 00:08:05.744 06:33:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.744 * Looking for test storage... 00:08:06.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.004 06:33:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:06.004 06:33:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:06.005 06:33:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:06.005 06:33:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:06.005 06:33:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:06.005 06:33:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:06.005 06:33:01 -- scripts/common.sh@335 -- # IFS=.-: 00:08:06.005 06:33:01 -- scripts/common.sh@335 -- # read -ra ver1 00:08:06.005 06:33:01 -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.005 06:33:01 -- scripts/common.sh@336 -- # read -ra ver2 00:08:06.005 06:33:01 -- scripts/common.sh@337 -- # local 'op=<' 00:08:06.005 06:33:01 -- scripts/common.sh@339 -- # ver1_l=2 00:08:06.005 06:33:01 -- scripts/common.sh@340 -- # ver2_l=1 00:08:06.005 06:33:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:06.005 06:33:01 -- scripts/common.sh@343 -- # case "$op" in 00:08:06.005 06:33:01 -- scripts/common.sh@344 -- # : 1 00:08:06.005 06:33:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:06.005 06:33:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.005 06:33:01 -- scripts/common.sh@364 -- # decimal 1 00:08:06.005 06:33:01 -- scripts/common.sh@352 -- # local d=1 00:08:06.005 06:33:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.005 06:33:01 -- scripts/common.sh@354 -- # echo 1 00:08:06.005 06:33:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:06.005 06:33:01 -- scripts/common.sh@365 -- # decimal 2 00:08:06.005 06:33:01 -- scripts/common.sh@352 -- # local d=2 00:08:06.005 06:33:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.005 06:33:01 -- scripts/common.sh@354 -- # echo 2 00:08:06.005 06:33:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:06.005 06:33:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:06.005 06:33:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:06.005 06:33:01 -- scripts/common.sh@367 -- # return 0 00:08:06.005 06:33:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.005 --rc genhtml_branch_coverage=1 00:08:06.005 --rc genhtml_function_coverage=1 00:08:06.005 --rc genhtml_legend=1 00:08:06.005 --rc geninfo_all_blocks=1 00:08:06.005 --rc geninfo_unexecuted_blocks=1 00:08:06.005 00:08:06.005 ' 00:08:06.005 06:33:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.005 06:33:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.005 06:33:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.005 06:33:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.005 06:33:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.005 06:33:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.005 06:33:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.005 06:33:01 -- paths/export.sh@5 -- # export PATH 00:08:06.005 06:33:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.005 06:33:01 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:06.005 06:33:01 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:06.005 06:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.005 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:06.005 ************************************ 00:08:06.005 START TEST dd_malloc_copy 00:08:06.005 ************************************ 00:08:06.005 06:33:01 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:06.005 06:33:01 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:06.005 06:33:01 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:06.005 06:33:01 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:06.005 06:33:01 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:06.005 06:33:01 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:06.005 06:33:01 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:06.005 06:33:01 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:06.005 06:33:01 -- dd/malloc.sh@28 -- # gen_conf 00:08:06.005 06:33:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.005 06:33:01 -- common/autotest_common.sh@10 -- # set +x 00:08:06.005 [2024-12-05 06:33:01.379211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.005 [2024-12-05 06:33:01.379342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:08:06.005 { 00:08:06.005 "subsystems": [ 00:08:06.005 { 00:08:06.005 "subsystem": "bdev", 00:08:06.005 "config": [ 00:08:06.005 { 00:08:06.005 "params": { 00:08:06.005 "block_size": 512, 00:08:06.005 "num_blocks": 1048576, 00:08:06.005 "name": "malloc0" 00:08:06.005 }, 00:08:06.005 "method": "bdev_malloc_create" 00:08:06.005 }, 00:08:06.005 { 00:08:06.005 "params": { 00:08:06.005 "block_size": 512, 00:08:06.005 "num_blocks": 1048576, 00:08:06.005 "name": "malloc1" 00:08:06.005 }, 00:08:06.005 "method": "bdev_malloc_create" 00:08:06.005 }, 00:08:06.005 { 00:08:06.005 "method": "bdev_wait_for_examine" 00:08:06.005 } 00:08:06.005 ] 00:08:06.005 } 00:08:06.005 ] 00:08:06.005 } 00:08:06.265 [2024-12-05 06:33:01.516499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.265 [2024-12-05 06:33:01.548556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.643  [2024-12-05T06:33:04.044Z] Copying: 227/512 [MB] (227 MBps) [2024-12-05T06:33:04.044Z] Copying: 454/512 [MB] (226 MBps) [2024-12-05T06:33:04.610Z] Copying: 512/512 [MB] (average 227 MBps) 00:08:09.144 00:08:09.144 06:33:04 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:09.144 06:33:04 -- dd/malloc.sh@33 -- # gen_conf 00:08:09.144 06:33:04 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.144 06:33:04 -- common/autotest_common.sh@10 -- # set +x 00:08:09.144 [2024-12-05 06:33:04.379339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
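The first malloc-to-malloc pass above moved 512 MB at roughly 227 MBps, which at 1048576 blocks of 512 bytes (512 MiB) works out to about 2.3 seconds per direction; the reverse pass now starting reuses the same bdev config with --ib and --ob swapped. A sketch of the invocation with the JSON from the trace inlined via a heredoc instead of gen_conf's /dev/fd/62:

    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"params": {"block_size": 512, "num_blocks": 1048576, "name": "malloc0"}, "method": "bdev_malloc_create"},
      {"params": {"block_size": 512, "num_blocks": 1048576, "name": "malloc1"}, "method": "bdev_malloc_create"},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    )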
00:08:09.144 [2024-12-05 06:33:04.379439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70360 ] 00:08:09.144 { 00:08:09.144 "subsystems": [ 00:08:09.144 { 00:08:09.144 "subsystem": "bdev", 00:08:09.144 "config": [ 00:08:09.144 { 00:08:09.144 "params": { 00:08:09.144 "block_size": 512, 00:08:09.144 "num_blocks": 1048576, 00:08:09.144 "name": "malloc0" 00:08:09.144 }, 00:08:09.144 "method": "bdev_malloc_create" 00:08:09.144 }, 00:08:09.144 { 00:08:09.144 "params": { 00:08:09.144 "block_size": 512, 00:08:09.144 "num_blocks": 1048576, 00:08:09.144 "name": "malloc1" 00:08:09.144 }, 00:08:09.144 "method": "bdev_malloc_create" 00:08:09.144 }, 00:08:09.144 { 00:08:09.144 "method": "bdev_wait_for_examine" 00:08:09.144 } 00:08:09.144 ] 00:08:09.144 } 00:08:09.144 ] 00:08:09.144 } 00:08:09.144 [2024-12-05 06:33:04.514605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.144 [2024-12-05 06:33:04.547166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.557  [2024-12-05T06:33:06.960Z] Copying: 238/512 [MB] (238 MBps) [2024-12-05T06:33:06.960Z] Copying: 477/512 [MB] (239 MBps) [2024-12-05T06:33:07.219Z] Copying: 512/512 [MB] (average 239 MBps) 00:08:11.753 00:08:11.753 00:08:11.753 real 0m5.877s 00:08:11.753 user 0m5.228s 00:08:11.753 sys 0m0.487s 00:08:11.753 06:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.753 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:08:11.753 ************************************ 00:08:11.753 END TEST dd_malloc_copy 00:08:11.753 ************************************ 00:08:12.014 00:08:12.014 real 0m6.124s 00:08:12.014 user 0m5.366s 00:08:12.014 sys 0m0.599s 00:08:12.014 06:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.014 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.014 ************************************ 00:08:12.014 END TEST spdk_dd_malloc 00:08:12.014 ************************************ 00:08:12.014 06:33:07 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:12.014 06:33:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.014 06:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.014 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.014 ************************************ 00:08:12.014 START TEST spdk_dd_bdev_to_bdev 00:08:12.014 ************************************ 00:08:12.014 06:33:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:12.014 * Looking for test storage... 
00:08:12.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:12.014 06:33:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.014 06:33:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.014 06:33:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.274 06:33:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.274 06:33:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.274 06:33:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.274 06:33:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.274 06:33:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.274 06:33:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.274 06:33:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.274 06:33:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.274 06:33:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.274 06:33:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.274 06:33:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.274 06:33:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.274 06:33:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.274 06:33:07 -- scripts/common.sh@344 -- # : 1 00:08:12.274 06:33:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.274 06:33:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.274 06:33:07 -- scripts/common.sh@364 -- # decimal 1 00:08:12.274 06:33:07 -- scripts/common.sh@352 -- # local d=1 00:08:12.274 06:33:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.274 06:33:07 -- scripts/common.sh@354 -- # echo 1 00:08:12.274 06:33:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.274 06:33:07 -- scripts/common.sh@365 -- # decimal 2 00:08:12.274 06:33:07 -- scripts/common.sh@352 -- # local d=2 00:08:12.274 06:33:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.274 06:33:07 -- scripts/common.sh@354 -- # echo 2 00:08:12.274 06:33:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.274 06:33:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.274 06:33:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.274 06:33:07 -- scripts/common.sh@367 -- # return 0 00:08:12.274 06:33:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.274 06:33:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.274 --rc genhtml_branch_coverage=1 00:08:12.274 --rc genhtml_function_coverage=1 00:08:12.274 --rc genhtml_legend=1 00:08:12.274 --rc geninfo_all_blocks=1 00:08:12.274 --rc geninfo_unexecuted_blocks=1 00:08:12.274 00:08:12.274 ' 00:08:12.274 06:33:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.274 --rc genhtml_branch_coverage=1 00:08:12.274 --rc genhtml_function_coverage=1 00:08:12.274 --rc genhtml_legend=1 00:08:12.274 --rc geninfo_all_blocks=1 00:08:12.274 --rc geninfo_unexecuted_blocks=1 00:08:12.274 00:08:12.274 ' 00:08:12.274 06:33:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.275 --rc genhtml_branch_coverage=1 00:08:12.275 --rc genhtml_function_coverage=1 00:08:12.275 --rc genhtml_legend=1 00:08:12.275 --rc geninfo_all_blocks=1 00:08:12.275 --rc geninfo_unexecuted_blocks=1 00:08:12.275 00:08:12.275 ' 00:08:12.275 06:33:07 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.275 --rc genhtml_branch_coverage=1 00:08:12.275 --rc genhtml_function_coverage=1 00:08:12.275 --rc genhtml_legend=1 00:08:12.275 --rc geninfo_all_blocks=1 00:08:12.275 --rc geninfo_unexecuted_blocks=1 00:08:12.275 00:08:12.275 ' 00:08:12.275 06:33:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.275 06:33:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.275 06:33:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.275 06:33:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.275 06:33:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.275 06:33:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.275 06:33:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.275 06:33:07 -- paths/export.sh@5 -- # export PATH 00:08:12.275 06:33:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:12.275 06:33:07 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.275 06:33:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:12.275 06:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.275 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.275 ************************************ 00:08:12.275 START TEST dd_inflate_file 00:08:12.275 ************************************ 00:08:12.275 06:33:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.275 [2024-12-05 06:33:07.571086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
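Before the inflate test runs, note how the two NVMe targets were wired up above: each controller is described by a bash associative array whose keys become the params of a bdev_nvme_attach_controller call once gen_conf serializes it. Sketched with a renamed array (ctrl0) and a hand-rolled dump in place of gen_conf, whose real implementation is not shown here:

    declare -A ctrl0=([name]=Nvme0 [traddr]=0000:00:06.0 [trtype]=pcie)
    for key in "${!ctrl0[@]}"; do
      printf '"%s": "%s"\n' "$key" "${ctrl0[$key]}"   # one params field per array key
    done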
00:08:12.275 [2024-12-05 06:33:07.571211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70466 ] 00:08:12.275 [2024-12-05 06:33:07.708492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.534 [2024-12-05 06:33:07.742281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.534  [2024-12-05T06:33:08.000Z] Copying: 64/64 [MB] (average 2285 MBps) 00:08:12.534 00:08:12.534 ************************************ 00:08:12.534 END TEST dd_inflate_file 00:08:12.534 ************************************ 00:08:12.534 00:08:12.534 real 0m0.451s 00:08:12.534 user 0m0.217s 00:08:12.534 sys 0m0.119s 00:08:12.534 06:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.534 06:33:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.795 06:33:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:12.795 06:33:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:12.795 06:33:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.795 06:33:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:12.795 06:33:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:12.795 06:33:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.795 06:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.795 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:12.795 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:08:12.795 ************************************ 00:08:12.795 START TEST dd_copy_to_out_bdev 00:08:12.795 ************************************ 00:08:12.795 06:33:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.795 [2024-12-05 06:33:08.077788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
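The 67108891 bytes measured by wc -c above decompose exactly: the magic line contributes 27 bytes (26 characters plus a newline) and the append-mode inflate adds 64 blocks of 1048576 zero bytes, 67108864 in total. Reproduced with GNU dd, which needs conv=notrunc so the append does not truncate the file first:

    echo 'This Is Our Magic, find it' > /tmp/dd.dump0                              # 27 bytes
    dd if=/dev/zero of=/tmp/dd.dump0 oflag=append conv=notrunc bs=1048576 count=64 2>/dev/null
    wc -c < /tmp/dd.dump0                                                          # 27 + 67108864 = 67108891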
00:08:12.795 [2024-12-05 06:33:08.077913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:08:12.795 { 00:08:12.795 "subsystems": [ 00:08:12.795 { 00:08:12.795 "subsystem": "bdev", 00:08:12.795 "config": [ 00:08:12.795 { 00:08:12.795 "params": { 00:08:12.795 "trtype": "pcie", 00:08:12.795 "traddr": "0000:00:06.0", 00:08:12.795 "name": "Nvme0" 00:08:12.795 }, 00:08:12.795 "method": "bdev_nvme_attach_controller" 00:08:12.795 }, 00:08:12.795 { 00:08:12.795 "params": { 00:08:12.795 "trtype": "pcie", 00:08:12.795 "traddr": "0000:00:07.0", 00:08:12.795 "name": "Nvme1" 00:08:12.795 }, 00:08:12.795 "method": "bdev_nvme_attach_controller" 00:08:12.795 }, 00:08:12.795 { 00:08:12.795 "method": "bdev_wait_for_examine" 00:08:12.795 } 00:08:12.795 ] 00:08:12.795 } 00:08:12.795 ] 00:08:12.795 } 00:08:12.795 [2024-12-05 06:33:08.210225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.795 [2024-12-05 06:33:08.241683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.180  [2024-12-05T06:33:09.905Z] Copying: 47/64 [MB] (47 MBps) [2024-12-05T06:33:10.164Z] Copying: 64/64 [MB] (average 47 MBps) 00:08:14.698 00:08:14.698 00:08:14.698 real 0m1.911s 00:08:14.698 user 0m1.699s 00:08:14.698 sys 0m0.147s 00:08:14.698 06:33:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.698 ************************************ 00:08:14.698 END TEST dd_copy_to_out_bdev 00:08:14.698 ************************************ 00:08:14.698 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:14.698 06:33:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.698 06:33:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.698 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:08:14.698 ************************************ 00:08:14.698 START TEST dd_offset_magic 00:08:14.698 ************************************ 00:08:14.698 06:33:09 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:14.698 06:33:09 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:14.698 06:33:09 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.698 06:33:10 -- common/autotest_common.sh@10 -- # set +x 00:08:14.698 [2024-12-05 06:33:10.048515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
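dd_copy_to_out_bdev pushed the 64 MB file straight into a bdev: --if names a regular file while --ob names Nvme0n1, with the attach-controller JSON shown above supplying both PCIe targets; the offset test now starting reuses that setup but copies bdev to bdev at a block offset. The shape of the file-to-bdev leg, assuming $conf holds the generated JSON (a placeholder, not a variable from the scripts):

    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --json "$conf"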
00:08:14.698 [2024-12-05 06:33:10.048648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70536 ] 00:08:14.698 { 00:08:14.698 "subsystems": [ 00:08:14.698 { 00:08:14.698 "subsystem": "bdev", 00:08:14.698 "config": [ 00:08:14.698 { 00:08:14.698 "params": { 00:08:14.698 "trtype": "pcie", 00:08:14.698 "traddr": "0000:00:06.0", 00:08:14.698 "name": "Nvme0" 00:08:14.698 }, 00:08:14.698 "method": "bdev_nvme_attach_controller" 00:08:14.698 }, 00:08:14.698 { 00:08:14.698 "params": { 00:08:14.698 "trtype": "pcie", 00:08:14.698 "traddr": "0000:00:07.0", 00:08:14.698 "name": "Nvme1" 00:08:14.698 }, 00:08:14.698 "method": "bdev_nvme_attach_controller" 00:08:14.698 }, 00:08:14.698 { 00:08:14.698 "method": "bdev_wait_for_examine" 00:08:14.698 } 00:08:14.698 ] 00:08:14.698 } 00:08:14.698 ] 00:08:14.698 } 00:08:14.957 [2024-12-05 06:33:10.186015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.957 [2024-12-05 06:33:10.226768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.215  [2024-12-05T06:33:10.681Z] Copying: 65/65 [MB] (average 812 MBps) 00:08:15.215 00:08:15.215 06:33:10 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:15.215 06:33:10 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:15.215 06:33:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.215 06:33:10 -- common/autotest_common.sh@10 -- # set +x 00:08:15.474 [2024-12-05 06:33:10.727523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:15.474 [2024-12-05 06:33:10.727641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70556 ] 00:08:15.474 { 00:08:15.474 "subsystems": [ 00:08:15.474 { 00:08:15.474 "subsystem": "bdev", 00:08:15.474 "config": [ 00:08:15.474 { 00:08:15.474 "params": { 00:08:15.474 "trtype": "pcie", 00:08:15.474 "traddr": "0000:00:06.0", 00:08:15.474 "name": "Nvme0" 00:08:15.474 }, 00:08:15.474 "method": "bdev_nvme_attach_controller" 00:08:15.474 }, 00:08:15.474 { 00:08:15.474 "params": { 00:08:15.474 "trtype": "pcie", 00:08:15.474 "traddr": "0000:00:07.0", 00:08:15.474 "name": "Nvme1" 00:08:15.474 }, 00:08:15.474 "method": "bdev_nvme_attach_controller" 00:08:15.474 }, 00:08:15.474 { 00:08:15.474 "method": "bdev_wait_for_examine" 00:08:15.474 } 00:08:15.474 ] 00:08:15.474 } 00:08:15.474 ] 00:08:15.474 } 00:08:15.474 [2024-12-05 06:33:10.868942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.474 [2024-12-05 06:33:10.908100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.732  [2024-12-05T06:33:11.457Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:15.991 00:08:15.991 06:33:11 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:15.991 06:33:11 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:15.991 06:33:11 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:15.991 06:33:11 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:15.991 06:33:11 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:15.991 06:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:08:15.991 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 [2024-12-05 06:33:11.306785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:15.992 [2024-12-05 06:33:11.306883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ] 00:08:15.992 { 00:08:15.992 "subsystems": [ 00:08:15.992 { 00:08:15.992 "subsystem": "bdev", 00:08:15.992 "config": [ 00:08:15.992 { 00:08:15.992 "params": { 00:08:15.992 "trtype": "pcie", 00:08:15.992 "traddr": "0000:00:06.0", 00:08:15.992 "name": "Nvme0" 00:08:15.992 }, 00:08:15.992 "method": "bdev_nvme_attach_controller" 00:08:15.992 }, 00:08:15.992 { 00:08:15.992 "params": { 00:08:15.992 "trtype": "pcie", 00:08:15.992 "traddr": "0000:00:07.0", 00:08:15.992 "name": "Nvme1" 00:08:15.992 }, 00:08:15.992 "method": "bdev_nvme_attach_controller" 00:08:15.992 }, 00:08:15.992 { 00:08:15.992 "method": "bdev_wait_for_examine" 00:08:15.992 } 00:08:15.992 ] 00:08:15.992 } 00:08:15.992 ] 00:08:15.992 } 00:08:15.992 [2024-12-05 06:33:11.443662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.251 [2024-12-05 06:33:11.483236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.510  [2024-12-05T06:33:11.976Z] Copying: 65/65 [MB] (average 1000 MBps) 00:08:16.510 00:08:16.510 06:33:11 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:16.510 06:33:11 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:16.510 06:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:08:16.510 06:33:11 -- common/autotest_common.sh@10 -- # set +x 00:08:16.510 [2024-12-05 06:33:11.946884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:16.510 [2024-12-05 06:33:11.946981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:08:16.510 { 00:08:16.510 "subsystems": [ 00:08:16.510 { 00:08:16.510 "subsystem": "bdev", 00:08:16.510 "config": [ 00:08:16.510 { 00:08:16.510 "params": { 00:08:16.511 "trtype": "pcie", 00:08:16.511 "traddr": "0000:00:06.0", 00:08:16.511 "name": "Nvme0" 00:08:16.511 }, 00:08:16.511 "method": "bdev_nvme_attach_controller" 00:08:16.511 }, 00:08:16.511 { 00:08:16.511 "params": { 00:08:16.511 "trtype": "pcie", 00:08:16.511 "traddr": "0000:00:07.0", 00:08:16.511 "name": "Nvme1" 00:08:16.511 }, 00:08:16.511 "method": "bdev_nvme_attach_controller" 00:08:16.511 }, 00:08:16.511 { 00:08:16.511 "method": "bdev_wait_for_examine" 00:08:16.511 } 00:08:16.511 ] 00:08:16.511 } 00:08:16.511 ] 00:08:16.539 } 00:08:16.798 [2024-12-05 06:33:12.083948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.798 [2024-12-05 06:33:12.126020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.057  [2024-12-05T06:33:12.523Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:17.057 00:08:17.057 06:33:12 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:17.057 06:33:12 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:17.057 00:08:17.057 real 0m2.490s 00:08:17.057 user 0m1.800s 00:08:17.057 sys 0m0.502s 00:08:17.057 ************************************ 00:08:17.057 END TEST dd_offset_magic 00:08:17.057 ************************************ 00:08:17.057 06:33:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.057 06:33:12 -- common/autotest_common.sh@10 -- # set +x 00:08:17.316 06:33:12 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:17.316 06:33:12 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:17.316 06:33:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:17.316 06:33:12 -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.316 06:33:12 -- dd/common.sh@12 -- # local size=4194330 00:08:17.316 06:33:12 -- dd/common.sh@14 -- # local bs=1048576 00:08:17.316 06:33:12 -- dd/common.sh@15 -- # local count=5 00:08:17.316 06:33:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:17.316 06:33:12 -- dd/common.sh@18 -- # gen_conf 00:08:17.316 06:33:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:17.316 06:33:12 -- common/autotest_common.sh@10 -- # set +x 00:08:17.316 [2024-12-05 06:33:12.574466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:17.316 [2024-12-05 06:33:12.574568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70620 ] 00:08:17.316 { 00:08:17.316 "subsystems": [ 00:08:17.316 { 00:08:17.316 "subsystem": "bdev", 00:08:17.316 "config": [ 00:08:17.316 { 00:08:17.316 "params": { 00:08:17.316 "trtype": "pcie", 00:08:17.316 "traddr": "0000:00:06.0", 00:08:17.316 "name": "Nvme0" 00:08:17.316 }, 00:08:17.316 "method": "bdev_nvme_attach_controller" 00:08:17.316 }, 00:08:17.316 { 00:08:17.316 "params": { 00:08:17.316 "trtype": "pcie", 00:08:17.316 "traddr": "0000:00:07.0", 00:08:17.316 "name": "Nvme1" 00:08:17.316 }, 00:08:17.316 "method": "bdev_nvme_attach_controller" 00:08:17.316 }, 00:08:17.316 { 00:08:17.316 "method": "bdev_wait_for_examine" 00:08:17.316 } 00:08:17.316 ] 00:08:17.316 } 00:08:17.316 ] 00:08:17.316 } 00:08:17.316 [2024-12-05 06:33:12.711405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.316 [2024-12-05 06:33:12.747231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.575  [2024-12-05T06:33:13.300Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:17.834 00:08:17.834 06:33:13 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:17.834 06:33:13 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:17.834 06:33:13 -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.834 06:33:13 -- dd/common.sh@12 -- # local size=4194330 00:08:17.834 06:33:13 -- dd/common.sh@14 -- # local bs=1048576 00:08:17.834 06:33:13 -- dd/common.sh@15 -- # local count=5 00:08:17.834 06:33:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:17.834 06:33:13 -- dd/common.sh@18 -- # gen_conf 00:08:17.834 06:33:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:17.834 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:08:17.834 [2024-12-05 06:33:13.127293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:17.834 [2024-12-05 06:33:13.127414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70629 ] 00:08:17.834 { 00:08:17.834 "subsystems": [ 00:08:17.834 { 00:08:17.834 "subsystem": "bdev", 00:08:17.834 "config": [ 00:08:17.834 { 00:08:17.834 "params": { 00:08:17.834 "trtype": "pcie", 00:08:17.834 "traddr": "0000:00:06.0", 00:08:17.834 "name": "Nvme0" 00:08:17.834 }, 00:08:17.834 "method": "bdev_nvme_attach_controller" 00:08:17.834 }, 00:08:17.834 { 00:08:17.834 "params": { 00:08:17.834 "trtype": "pcie", 00:08:17.834 "traddr": "0000:00:07.0", 00:08:17.834 "name": "Nvme1" 00:08:17.834 }, 00:08:17.834 "method": "bdev_nvme_attach_controller" 00:08:17.834 }, 00:08:17.834 { 00:08:17.834 "method": "bdev_wait_for_examine" 00:08:17.834 } 00:08:17.834 ] 00:08:17.834 } 00:08:17.834 ] 00:08:17.834 } 00:08:17.834 [2024-12-05 06:33:13.264491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.092 [2024-12-05 06:33:13.304231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.092  [2024-12-05T06:33:13.817Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:18.351 00:08:18.351 06:33:13 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:18.351 00:08:18.351 real 0m6.366s 00:08:18.351 user 0m4.672s 00:08:18.351 sys 0m1.208s 00:08:18.351 06:33:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.351 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.351 ************************************ 00:08:18.351 END TEST spdk_dd_bdev_to_bdev 00:08:18.351 ************************************ 00:08:18.351 06:33:13 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:18.351 06:33:13 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.351 06:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.351 06:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.351 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.351 ************************************ 00:08:18.351 START TEST spdk_dd_uring 00:08:18.351 ************************************ 00:08:18.351 06:33:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.351 * Looking for test storage... 
00:08:18.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.351 06:33:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.351 06:33:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.351 06:33:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.611 06:33:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.611 06:33:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.611 06:33:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.611 06:33:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.611 06:33:13 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.611 06:33:13 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.611 06:33:13 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.611 06:33:13 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.611 06:33:13 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.611 06:33:13 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.611 06:33:13 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.611 06:33:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.611 06:33:13 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.611 06:33:13 -- scripts/common.sh@344 -- # : 1 00:08:18.611 06:33:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.611 06:33:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.611 06:33:13 -- scripts/common.sh@364 -- # decimal 1 00:08:18.611 06:33:13 -- scripts/common.sh@352 -- # local d=1 00:08:18.611 06:33:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.611 06:33:13 -- scripts/common.sh@354 -- # echo 1 00:08:18.611 06:33:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.611 06:33:13 -- scripts/common.sh@365 -- # decimal 2 00:08:18.611 06:33:13 -- scripts/common.sh@352 -- # local d=2 00:08:18.611 06:33:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.611 06:33:13 -- scripts/common.sh@354 -- # echo 2 00:08:18.611 06:33:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.611 06:33:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.611 06:33:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.611 06:33:13 -- scripts/common.sh@367 -- # return 0 00:08:18.611 06:33:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.611 06:33:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.611 --rc genhtml_branch_coverage=1 00:08:18.611 --rc genhtml_function_coverage=1 00:08:18.611 --rc genhtml_legend=1 00:08:18.611 --rc geninfo_all_blocks=1 00:08:18.611 --rc geninfo_unexecuted_blocks=1 00:08:18.611 00:08:18.611 ' 00:08:18.611 06:33:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.611 --rc genhtml_branch_coverage=1 00:08:18.611 --rc genhtml_function_coverage=1 00:08:18.611 --rc genhtml_legend=1 00:08:18.611 --rc geninfo_all_blocks=1 00:08:18.611 --rc geninfo_unexecuted_blocks=1 00:08:18.611 00:08:18.611 ' 00:08:18.611 06:33:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.611 --rc genhtml_branch_coverage=1 00:08:18.611 --rc genhtml_function_coverage=1 00:08:18.611 --rc genhtml_legend=1 00:08:18.611 --rc geninfo_all_blocks=1 00:08:18.611 --rc geninfo_unexecuted_blocks=1 00:08:18.611 00:08:18.611 ' 00:08:18.611 06:33:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.611 --rc genhtml_branch_coverage=1 00:08:18.611 --rc genhtml_function_coverage=1 00:08:18.611 --rc genhtml_legend=1 00:08:18.611 --rc geninfo_all_blocks=1 00:08:18.611 --rc geninfo_unexecuted_blocks=1 00:08:18.611 00:08:18.611 ' 00:08:18.611 06:33:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.611 06:33:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.611 06:33:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.611 06:33:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.611 06:33:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.611 06:33:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.611 06:33:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.611 06:33:13 -- paths/export.sh@5 -- # export PATH 00:08:18.611 06:33:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.611 06:33:13 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:18.611 06:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.611 06:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.611 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.611 ************************************ 00:08:18.611 START TEST dd_uring_copy 00:08:18.611 ************************************ 00:08:18.611 06:33:13 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:18.611 06:33:13 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:18.611 06:33:13 -- dd/uring.sh@16 -- # local magic 00:08:18.611 06:33:13 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:18.611 06:33:13 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:18.611 06:33:13 -- dd/uring.sh@19 -- # local verify_magic 00:08:18.611 06:33:13 -- dd/uring.sh@21 -- # init_zram 00:08:18.611 06:33:13 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:18.611 06:33:13 -- dd/common.sh@164 -- # return 00:08:18.611 06:33:13 -- dd/uring.sh@22 -- # create_zram_dev 00:08:18.611 06:33:13 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:18.611 06:33:13 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:18.611 06:33:13 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:18.611 06:33:13 -- dd/common.sh@181 -- # local id=1 00:08:18.611 06:33:13 -- dd/common.sh@182 -- # local size=512M 00:08:18.611 06:33:13 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:18.611 06:33:13 -- dd/common.sh@186 -- # echo 512M 00:08:18.611 06:33:13 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:18.612 06:33:13 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:18.612 06:33:13 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:18.612 06:33:13 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:18.612 06:33:13 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:18.612 06:33:13 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:18.612 06:33:13 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:18.612 06:33:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.612 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 06:33:13 -- dd/uring.sh@41 -- # magic=ixn3x1k2kkco6bnr4l59itn1mwem51dj4wnmq21k24hkwws624hve3nhyvv6lgsu6psgbd7ag8rj7mitkwq95mo3hkt6td2pb5jqaodj2lsh5kxjsrnjpbaprqbqzqqxkptxfw57pt0lrew3lxic0i6xgl55oiv3285xfi12cavq7hx1e735sr2hjb74aijz6tl9pcj3729i5hv1vo7e1h9ti4756kiprpfqzydveksuuqg1n5wv87tnzo78eso7w7ai1yuyucv8znwe5hmra8oclqgtm8qxzs5rau0rda60rcz8xwusl15ugqfc3ddrgo8fnfimifc3lyue5mu8o8gfy6c41lhivnovrnj3t9ps91sg4ruqe6i5j0tyrdpkwy6ux9dp19z32zwttaaax4hlzr5xavszt1ipclb1s7t4um595s6h6rvwabuthb5wi66jygjkyu2je8kkuvssh9e4vd6a9q7r436qrb05tvufxtu57w65078z1k270oavi85iyfry0meb30vdj9rgingcjpyvroj61jcc555telnhdyvcnm1kibdoo69idzpebrsyk5x5c1o7j9asfcrksg46pgsgvo7ugcrdjobspc882f0sev4qkgk6oowipkwwzftfqie7e70z7kd3mrukv0krk2aiqeobykfb2i7pb26u4h4l8go7c4hit8ll6pyuiehtbp3bqtp7qs8e8wmfmk52mq8xyegs81mxgektto8gezrkxwtf8k5p04mh2t1g5cd42bt08ouqpxss8rjisukmhdyamwrpv2r1tbz3ihbn3imk9052af3bgn0a4schdyskku4b3nw0zdfd1qm6k54svz22i81ygsouberabzuk3n1b70h1je483nxxgrmnj4bqcu6l8kubzg3f3w24mdkj440lb8rj1jhs6shqcggs82q31srk5apufgwaau3nho55zab6xq0jotgatepzmvj6caf57s0p0my20kjm314qevcux9u8qkewdk3csw32 00:08:18.612 06:33:13 -- dd/uring.sh@42 -- # echo 
ixn3x1k2kkco6bnr4l59itn1mwem51dj4wnmq21k24hkwws624hve3nhyvv6lgsu6psgbd7ag8rj7mitkwq95mo3hkt6td2pb5jqaodj2lsh5kxjsrnjpbaprqbqzqqxkptxfw57pt0lrew3lxic0i6xgl55oiv3285xfi12cavq7hx1e735sr2hjb74aijz6tl9pcj3729i5hv1vo7e1h9ti4756kiprpfqzydveksuuqg1n5wv87tnzo78eso7w7ai1yuyucv8znwe5hmra8oclqgtm8qxzs5rau0rda60rcz8xwusl15ugqfc3ddrgo8fnfimifc3lyue5mu8o8gfy6c41lhivnovrnj3t9ps91sg4ruqe6i5j0tyrdpkwy6ux9dp19z32zwttaaax4hlzr5xavszt1ipclb1s7t4um595s6h6rvwabuthb5wi66jygjkyu2je8kkuvssh9e4vd6a9q7r436qrb05tvufxtu57w65078z1k270oavi85iyfry0meb30vdj9rgingcjpyvroj61jcc555telnhdyvcnm1kibdoo69idzpebrsyk5x5c1o7j9asfcrksg46pgsgvo7ugcrdjobspc882f0sev4qkgk6oowipkwwzftfqie7e70z7kd3mrukv0krk2aiqeobykfb2i7pb26u4h4l8go7c4hit8ll6pyuiehtbp3bqtp7qs8e8wmfmk52mq8xyegs81mxgektto8gezrkxwtf8k5p04mh2t1g5cd42bt08ouqpxss8rjisukmhdyamwrpv2r1tbz3ihbn3imk9052af3bgn0a4schdyskku4b3nw0zdfd1qm6k54svz22i81ygsouberabzuk3n1b70h1je483nxxgrmnj4bqcu6l8kubzg3f3w24mdkj440lb8rj1jhs6shqcggs82q31srk5apufgwaau3nho55zab6xq0jotgatepzmvj6caf57s0p0my20kjm314qevcux9u8qkewdk3csw32 00:08:18.612 06:33:13 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:18.612 [2024-12-05 06:33:13.972693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:18.612 [2024-12-05 06:33:13.972782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ] 00:08:18.871 [2024-12-05 06:33:14.108762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.871 [2024-12-05 06:33:14.146164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.131  [2024-12-05T06:33:14.856Z] Copying: 511/511 [MB] (average 1828 MBps) 00:08:19.390 00:08:19.390 06:33:14 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:19.390 06:33:14 -- dd/uring.sh@54 -- # gen_conf 00:08:19.390 06:33:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:19.390 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:08:19.390 [2024-12-05 06:33:14.838611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:19.390 [2024-12-05 06:33:14.838707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70719 ] 00:08:19.390 { 00:08:19.390 "subsystems": [ 00:08:19.390 { 00:08:19.390 "subsystem": "bdev", 00:08:19.390 "config": [ 00:08:19.390 { 00:08:19.390 "params": { 00:08:19.390 "block_size": 512, 00:08:19.390 "num_blocks": 1048576, 00:08:19.390 "name": "malloc0" 00:08:19.390 }, 00:08:19.390 "method": "bdev_malloc_create" 00:08:19.390 }, 00:08:19.390 { 00:08:19.390 "params": { 00:08:19.390 "filename": "/dev/zram1", 00:08:19.390 "name": "uring0" 00:08:19.390 }, 00:08:19.390 "method": "bdev_uring_create" 00:08:19.390 }, 00:08:19.390 { 00:08:19.390 "method": "bdev_wait_for_examine" 00:08:19.390 } 00:08:19.390 ] 00:08:19.390 } 00:08:19.390 ] 00:08:19.390 } 00:08:19.649 [2024-12-05 06:33:14.975946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.649 [2024-12-05 06:33:15.023337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.024  [2024-12-05T06:33:17.423Z] Copying: 208/512 [MB] (208 MBps) [2024-12-05T06:33:17.680Z] Copying: 414/512 [MB] (205 MBps) [2024-12-05T06:33:17.939Z] Copying: 512/512 [MB] (average 207 MBps) 00:08:22.473 00:08:22.473 06:33:17 -- dd/uring.sh@60 -- # gen_conf 00:08:22.473 06:33:17 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:22.473 06:33:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.473 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 [2024-12-05 06:33:17.961155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:22.732 [2024-12-05 06:33:17.961251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:08:22.732 { 00:08:22.732 "subsystems": [ 00:08:22.732 { 00:08:22.732 "subsystem": "bdev", 00:08:22.732 "config": [ 00:08:22.732 { 00:08:22.732 "params": { 00:08:22.732 "block_size": 512, 00:08:22.732 "num_blocks": 1048576, 00:08:22.732 "name": "malloc0" 00:08:22.732 }, 00:08:22.732 "method": "bdev_malloc_create" 00:08:22.732 }, 00:08:22.732 { 00:08:22.732 "params": { 00:08:22.732 "filename": "/dev/zram1", 00:08:22.732 "name": "uring0" 00:08:22.732 }, 00:08:22.732 "method": "bdev_uring_create" 00:08:22.732 }, 00:08:22.732 { 00:08:22.733 "method": "bdev_wait_for_examine" 00:08:22.733 } 00:08:22.733 ] 00:08:22.733 } 00:08:22.733 ] 00:08:22.733 } 00:08:22.733 [2024-12-05 06:33:18.095856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.733 [2024-12-05 06:33:18.127780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.111  [2024-12-05T06:33:20.512Z] Copying: 150/512 [MB] (150 MBps) [2024-12-05T06:33:21.458Z] Copying: 288/512 [MB] (137 MBps) [2024-12-05T06:33:22.039Z] Copying: 437/512 [MB] (149 MBps) [2024-12-05T06:33:22.298Z] Copying: 512/512 [MB] (average 142 MBps) 00:08:26.832 00:08:26.832 06:33:22 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:26.832 06:33:22 -- dd/uring.sh@66 -- # [[ ixn3x1k2kkco6bnr4l59itn1mwem51dj4wnmq21k24hkwws624hve3nhyvv6lgsu6psgbd7ag8rj7mitkwq95mo3hkt6td2pb5jqaodj2lsh5kxjsrnjpbaprqbqzqqxkptxfw57pt0lrew3lxic0i6xgl55oiv3285xfi12cavq7hx1e735sr2hjb74aijz6tl9pcj3729i5hv1vo7e1h9ti4756kiprpfqzydveksuuqg1n5wv87tnzo78eso7w7ai1yuyucv8znwe5hmra8oclqgtm8qxzs5rau0rda60rcz8xwusl15ugqfc3ddrgo8fnfimifc3lyue5mu8o8gfy6c41lhivnovrnj3t9ps91sg4ruqe6i5j0tyrdpkwy6ux9dp19z32zwttaaax4hlzr5xavszt1ipclb1s7t4um595s6h6rvwabuthb5wi66jygjkyu2je8kkuvssh9e4vd6a9q7r436qrb05tvufxtu57w65078z1k270oavi85iyfry0meb30vdj9rgingcjpyvroj61jcc555telnhdyvcnm1kibdoo69idzpebrsyk5x5c1o7j9asfcrksg46pgsgvo7ugcrdjobspc882f0sev4qkgk6oowipkwwzftfqie7e70z7kd3mrukv0krk2aiqeobykfb2i7pb26u4h4l8go7c4hit8ll6pyuiehtbp3bqtp7qs8e8wmfmk52mq8xyegs81mxgektto8gezrkxwtf8k5p04mh2t1g5cd42bt08ouqpxss8rjisukmhdyamwrpv2r1tbz3ihbn3imk9052af3bgn0a4schdyskku4b3nw0zdfd1qm6k54svz22i81ygsouberabzuk3n1b70h1je483nxxgrmnj4bqcu6l8kubzg3f3w24mdkj440lb8rj1jhs6shqcggs82q31srk5apufgwaau3nho55zab6xq0jotgatepzmvj6caf57s0p0my20kjm314qevcux9u8qkewdk3csw32 == 
\i\x\n\3\x\1\k\2\k\k\c\o\6\b\n\r\4\l\5\9\i\t\n\1\m\w\e\m\5\1\d\j\4\w\n\m\q\2\1\k\2\4\h\k\w\w\s\6\2\4\h\v\e\3\n\h\y\v\v\6\l\g\s\u\6\p\s\g\b\d\7\a\g\8\r\j\7\m\i\t\k\w\q\9\5\m\o\3\h\k\t\6\t\d\2\p\b\5\j\q\a\o\d\j\2\l\s\h\5\k\x\j\s\r\n\j\p\b\a\p\r\q\b\q\z\q\q\x\k\p\t\x\f\w\5\7\p\t\0\l\r\e\w\3\l\x\i\c\0\i\6\x\g\l\5\5\o\i\v\3\2\8\5\x\f\i\1\2\c\a\v\q\7\h\x\1\e\7\3\5\s\r\2\h\j\b\7\4\a\i\j\z\6\t\l\9\p\c\j\3\7\2\9\i\5\h\v\1\v\o\7\e\1\h\9\t\i\4\7\5\6\k\i\p\r\p\f\q\z\y\d\v\e\k\s\u\u\q\g\1\n\5\w\v\8\7\t\n\z\o\7\8\e\s\o\7\w\7\a\i\1\y\u\y\u\c\v\8\z\n\w\e\5\h\m\r\a\8\o\c\l\q\g\t\m\8\q\x\z\s\5\r\a\u\0\r\d\a\6\0\r\c\z\8\x\w\u\s\l\1\5\u\g\q\f\c\3\d\d\r\g\o\8\f\n\f\i\m\i\f\c\3\l\y\u\e\5\m\u\8\o\8\g\f\y\6\c\4\1\l\h\i\v\n\o\v\r\n\j\3\t\9\p\s\9\1\s\g\4\r\u\q\e\6\i\5\j\0\t\y\r\d\p\k\w\y\6\u\x\9\d\p\1\9\z\3\2\z\w\t\t\a\a\a\x\4\h\l\z\r\5\x\a\v\s\z\t\1\i\p\c\l\b\1\s\7\t\4\u\m\5\9\5\s\6\h\6\r\v\w\a\b\u\t\h\b\5\w\i\6\6\j\y\g\j\k\y\u\2\j\e\8\k\k\u\v\s\s\h\9\e\4\v\d\6\a\9\q\7\r\4\3\6\q\r\b\0\5\t\v\u\f\x\t\u\5\7\w\6\5\0\7\8\z\1\k\2\7\0\o\a\v\i\8\5\i\y\f\r\y\0\m\e\b\3\0\v\d\j\9\r\g\i\n\g\c\j\p\y\v\r\o\j\6\1\j\c\c\5\5\5\t\e\l\n\h\d\y\v\c\n\m\1\k\i\b\d\o\o\6\9\i\d\z\p\e\b\r\s\y\k\5\x\5\c\1\o\7\j\9\a\s\f\c\r\k\s\g\4\6\p\g\s\g\v\o\7\u\g\c\r\d\j\o\b\s\p\c\8\8\2\f\0\s\e\v\4\q\k\g\k\6\o\o\w\i\p\k\w\w\z\f\t\f\q\i\e\7\e\7\0\z\7\k\d\3\m\r\u\k\v\0\k\r\k\2\a\i\q\e\o\b\y\k\f\b\2\i\7\p\b\2\6\u\4\h\4\l\8\g\o\7\c\4\h\i\t\8\l\l\6\p\y\u\i\e\h\t\b\p\3\b\q\t\p\7\q\s\8\e\8\w\m\f\m\k\5\2\m\q\8\x\y\e\g\s\8\1\m\x\g\e\k\t\t\o\8\g\e\z\r\k\x\w\t\f\8\k\5\p\0\4\m\h\2\t\1\g\5\c\d\4\2\b\t\0\8\o\u\q\p\x\s\s\8\r\j\i\s\u\k\m\h\d\y\a\m\w\r\p\v\2\r\1\t\b\z\3\i\h\b\n\3\i\m\k\9\0\5\2\a\f\3\b\g\n\0\a\4\s\c\h\d\y\s\k\k\u\4\b\3\n\w\0\z\d\f\d\1\q\m\6\k\5\4\s\v\z\2\2\i\8\1\y\g\s\o\u\b\e\r\a\b\z\u\k\3\n\1\b\7\0\h\1\j\e\4\8\3\n\x\x\g\r\m\n\j\4\b\q\c\u\6\l\8\k\u\b\z\g\3\f\3\w\2\4\m\d\k\j\4\4\0\l\b\8\r\j\1\j\h\s\6\s\h\q\c\g\g\s\8\2\q\3\1\s\r\k\5\a\p\u\f\g\w\a\a\u\3\n\h\o\5\5\z\a\b\6\x\q\0\j\o\t\g\a\t\e\p\z\m\v\j\6\c\a\f\5\7\s\0\p\0\m\y\2\0\k\j\m\3\1\4\q\e\v\c\u\x\9\u\8\q\k\e\w\d\k\3\c\s\w\3\2 ]] 00:08:26.832 06:33:22 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:26.833 06:33:22 -- dd/uring.sh@69 -- # [[ ixn3x1k2kkco6bnr4l59itn1mwem51dj4wnmq21k24hkwws624hve3nhyvv6lgsu6psgbd7ag8rj7mitkwq95mo3hkt6td2pb5jqaodj2lsh5kxjsrnjpbaprqbqzqqxkptxfw57pt0lrew3lxic0i6xgl55oiv3285xfi12cavq7hx1e735sr2hjb74aijz6tl9pcj3729i5hv1vo7e1h9ti4756kiprpfqzydveksuuqg1n5wv87tnzo78eso7w7ai1yuyucv8znwe5hmra8oclqgtm8qxzs5rau0rda60rcz8xwusl15ugqfc3ddrgo8fnfimifc3lyue5mu8o8gfy6c41lhivnovrnj3t9ps91sg4ruqe6i5j0tyrdpkwy6ux9dp19z32zwttaaax4hlzr5xavszt1ipclb1s7t4um595s6h6rvwabuthb5wi66jygjkyu2je8kkuvssh9e4vd6a9q7r436qrb05tvufxtu57w65078z1k270oavi85iyfry0meb30vdj9rgingcjpyvroj61jcc555telnhdyvcnm1kibdoo69idzpebrsyk5x5c1o7j9asfcrksg46pgsgvo7ugcrdjobspc882f0sev4qkgk6oowipkwwzftfqie7e70z7kd3mrukv0krk2aiqeobykfb2i7pb26u4h4l8go7c4hit8ll6pyuiehtbp3bqtp7qs8e8wmfmk52mq8xyegs81mxgektto8gezrkxwtf8k5p04mh2t1g5cd42bt08ouqpxss8rjisukmhdyamwrpv2r1tbz3ihbn3imk9052af3bgn0a4schdyskku4b3nw0zdfd1qm6k54svz22i81ygsouberabzuk3n1b70h1je483nxxgrmnj4bqcu6l8kubzg3f3w24mdkj440lb8rj1jhs6shqcggs82q31srk5apufgwaau3nho55zab6xq0jotgatepzmvj6caf57s0p0my20kjm314qevcux9u8qkewdk3csw32 == 
\i\x\n\3\x\1\k\2\k\k\c\o\6\b\n\r\4\l\5\9\i\t\n\1\m\w\e\m\5\1\d\j\4\w\n\m\q\2\1\k\2\4\h\k\w\w\s\6\2\4\h\v\e\3\n\h\y\v\v\6\l\g\s\u\6\p\s\g\b\d\7\a\g\8\r\j\7\m\i\t\k\w\q\9\5\m\o\3\h\k\t\6\t\d\2\p\b\5\j\q\a\o\d\j\2\l\s\h\5\k\x\j\s\r\n\j\p\b\a\p\r\q\b\q\z\q\q\x\k\p\t\x\f\w\5\7\p\t\0\l\r\e\w\3\l\x\i\c\0\i\6\x\g\l\5\5\o\i\v\3\2\8\5\x\f\i\1\2\c\a\v\q\7\h\x\1\e\7\3\5\s\r\2\h\j\b\7\4\a\i\j\z\6\t\l\9\p\c\j\3\7\2\9\i\5\h\v\1\v\o\7\e\1\h\9\t\i\4\7\5\6\k\i\p\r\p\f\q\z\y\d\v\e\k\s\u\u\q\g\1\n\5\w\v\8\7\t\n\z\o\7\8\e\s\o\7\w\7\a\i\1\y\u\y\u\c\v\8\z\n\w\e\5\h\m\r\a\8\o\c\l\q\g\t\m\8\q\x\z\s\5\r\a\u\0\r\d\a\6\0\r\c\z\8\x\w\u\s\l\1\5\u\g\q\f\c\3\d\d\r\g\o\8\f\n\f\i\m\i\f\c\3\l\y\u\e\5\m\u\8\o\8\g\f\y\6\c\4\1\l\h\i\v\n\o\v\r\n\j\3\t\9\p\s\9\1\s\g\4\r\u\q\e\6\i\5\j\0\t\y\r\d\p\k\w\y\6\u\x\9\d\p\1\9\z\3\2\z\w\t\t\a\a\a\x\4\h\l\z\r\5\x\a\v\s\z\t\1\i\p\c\l\b\1\s\7\t\4\u\m\5\9\5\s\6\h\6\r\v\w\a\b\u\t\h\b\5\w\i\6\6\j\y\g\j\k\y\u\2\j\e\8\k\k\u\v\s\s\h\9\e\4\v\d\6\a\9\q\7\r\4\3\6\q\r\b\0\5\t\v\u\f\x\t\u\5\7\w\6\5\0\7\8\z\1\k\2\7\0\o\a\v\i\8\5\i\y\f\r\y\0\m\e\b\3\0\v\d\j\9\r\g\i\n\g\c\j\p\y\v\r\o\j\6\1\j\c\c\5\5\5\t\e\l\n\h\d\y\v\c\n\m\1\k\i\b\d\o\o\6\9\i\d\z\p\e\b\r\s\y\k\5\x\5\c\1\o\7\j\9\a\s\f\c\r\k\s\g\4\6\p\g\s\g\v\o\7\u\g\c\r\d\j\o\b\s\p\c\8\8\2\f\0\s\e\v\4\q\k\g\k\6\o\o\w\i\p\k\w\w\z\f\t\f\q\i\e\7\e\7\0\z\7\k\d\3\m\r\u\k\v\0\k\r\k\2\a\i\q\e\o\b\y\k\f\b\2\i\7\p\b\2\6\u\4\h\4\l\8\g\o\7\c\4\h\i\t\8\l\l\6\p\y\u\i\e\h\t\b\p\3\b\q\t\p\7\q\s\8\e\8\w\m\f\m\k\5\2\m\q\8\x\y\e\g\s\8\1\m\x\g\e\k\t\t\o\8\g\e\z\r\k\x\w\t\f\8\k\5\p\0\4\m\h\2\t\1\g\5\c\d\4\2\b\t\0\8\o\u\q\p\x\s\s\8\r\j\i\s\u\k\m\h\d\y\a\m\w\r\p\v\2\r\1\t\b\z\3\i\h\b\n\3\i\m\k\9\0\5\2\a\f\3\b\g\n\0\a\4\s\c\h\d\y\s\k\k\u\4\b\3\n\w\0\z\d\f\d\1\q\m\6\k\5\4\s\v\z\2\2\i\8\1\y\g\s\o\u\b\e\r\a\b\z\u\k\3\n\1\b\7\0\h\1\j\e\4\8\3\n\x\x\g\r\m\n\j\4\b\q\c\u\6\l\8\k\u\b\z\g\3\f\3\w\2\4\m\d\k\j\4\4\0\l\b\8\r\j\1\j\h\s\6\s\h\q\c\g\g\s\8\2\q\3\1\s\r\k\5\a\p\u\f\g\w\a\a\u\3\n\h\o\5\5\z\a\b\6\x\q\0\j\o\t\g\a\t\e\p\z\m\v\j\6\c\a\f\5\7\s\0\p\0\m\y\2\0\k\j\m\3\1\4\q\e\v\c\u\x\9\u\8\q\k\e\w\d\k\3\c\s\w\3\2 ]] 00:08:26.833 06:33:22 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:27.092 06:33:22 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:27.092 06:33:22 -- dd/uring.sh@75 -- # gen_conf 00:08:27.092 06:33:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.092 06:33:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.092 [2024-12-05 06:33:22.484850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:27.092 [2024-12-05 06:33:22.484958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:08:27.092 { 00:08:27.092 "subsystems": [ 00:08:27.092 { 00:08:27.092 "subsystem": "bdev", 00:08:27.092 "config": [ 00:08:27.092 { 00:08:27.092 "params": { 00:08:27.092 "block_size": 512, 00:08:27.092 "num_blocks": 1048576, 00:08:27.092 "name": "malloc0" 00:08:27.092 }, 00:08:27.092 "method": "bdev_malloc_create" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "params": { 00:08:27.092 "filename": "/dev/zram1", 00:08:27.092 "name": "uring0" 00:08:27.092 }, 00:08:27.092 "method": "bdev_uring_create" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "method": "bdev_wait_for_examine" 00:08:27.092 } 00:08:27.092 ] 00:08:27.092 } 00:08:27.092 ] 00:08:27.092 } 00:08:27.351 [2024-12-05 06:33:22.614360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.351 [2024-12-05 06:33:22.645570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.730  [2024-12-05T06:33:25.132Z] Copying: 175/512 [MB] (175 MBps) [2024-12-05T06:33:25.698Z] Copying: 351/512 [MB] (176 MBps) [2024-12-05T06:33:25.957Z] Copying: 512/512 [MB] (average 176 MBps) 00:08:30.491 00:08:30.491 06:33:25 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:30.491 06:33:25 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:30.491 06:33:25 -- dd/uring.sh@87 -- # : 00:08:30.491 06:33:25 -- dd/uring.sh@87 -- # : 00:08:30.491 06:33:25 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:30.491 06:33:25 -- dd/uring.sh@87 -- # gen_conf 00:08:30.491 06:33:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.491 06:33:25 -- common/autotest_common.sh@10 -- # set +x 00:08:30.491 [2024-12-05 06:33:25.951717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:30.491 [2024-12-05 06:33:25.951815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70893 ] 00:08:30.750 { 00:08:30.750 "subsystems": [ 00:08:30.750 { 00:08:30.750 "subsystem": "bdev", 00:08:30.750 "config": [ 00:08:30.750 { 00:08:30.750 "params": { 00:08:30.750 "block_size": 512, 00:08:30.750 "num_blocks": 1048576, 00:08:30.750 "name": "malloc0" 00:08:30.750 }, 00:08:30.750 "method": "bdev_malloc_create" 00:08:30.750 }, 00:08:30.750 { 00:08:30.750 "params": { 00:08:30.750 "filename": "/dev/zram1", 00:08:30.750 "name": "uring0" 00:08:30.750 }, 00:08:30.750 "method": "bdev_uring_create" 00:08:30.750 }, 00:08:30.750 { 00:08:30.750 "params": { 00:08:30.750 "name": "uring0" 00:08:30.750 }, 00:08:30.750 "method": "bdev_uring_delete" 00:08:30.750 }, 00:08:30.750 { 00:08:30.750 "method": "bdev_wait_for_examine" 00:08:30.750 } 00:08:30.750 ] 00:08:30.750 } 00:08:30.750 ] 00:08:30.750 } 00:08:30.750 [2024-12-05 06:33:26.087264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.750 [2024-12-05 06:33:26.119607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.010  [2024-12-05T06:33:26.736Z] Copying: 0/0 [B] (average 0 Bps) 00:08:31.270 00:08:31.270 06:33:26 -- dd/uring.sh@94 -- # : 00:08:31.270 06:33:26 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:31.270 06:33:26 -- dd/uring.sh@94 -- # gen_conf 00:08:31.270 06:33:26 -- common/autotest_common.sh@650 -- # local es=0 00:08:31.270 06:33:26 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.270 06:33:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:31.270 06:33:26 -- common/autotest_common.sh@10 -- # set +x 00:08:31.270 06:33:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.270 06:33:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.270 06:33:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.270 06:33:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.270 06:33:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.270 06:33:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.270 06:33:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.270 06:33:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.270 06:33:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:31.270 [2024-12-05 06:33:26.581302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:31.270 [2024-12-05 06:33:26.581409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70916 ] 00:08:31.270 { 00:08:31.270 "subsystems": [ 00:08:31.270 { 00:08:31.270 "subsystem": "bdev", 00:08:31.270 "config": [ 00:08:31.270 { 00:08:31.270 "params": { 00:08:31.270 "block_size": 512, 00:08:31.270 "num_blocks": 1048576, 00:08:31.270 "name": "malloc0" 00:08:31.270 }, 00:08:31.270 "method": "bdev_malloc_create" 00:08:31.270 }, 00:08:31.270 { 00:08:31.270 "params": { 00:08:31.270 "filename": "/dev/zram1", 00:08:31.270 "name": "uring0" 00:08:31.270 }, 00:08:31.270 "method": "bdev_uring_create" 00:08:31.270 }, 00:08:31.270 { 00:08:31.270 "params": { 00:08:31.270 "name": "uring0" 00:08:31.270 }, 00:08:31.270 "method": "bdev_uring_delete" 00:08:31.270 }, 00:08:31.270 { 00:08:31.270 "method": "bdev_wait_for_examine" 00:08:31.270 } 00:08:31.270 ] 00:08:31.270 } 00:08:31.270 ] 00:08:31.270 } 00:08:31.270 [2024-12-05 06:33:26.708967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.530 [2024-12-05 06:33:26.743002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.530 [2024-12-05 06:33:26.883278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:31.530 [2024-12-05 06:33:26.883350] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:31.530 [2024-12-05 06:33:26.883361] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:31.530 [2024-12-05 06:33:26.883370] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.789 [2024-12-05 06:33:27.061143] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:31.789 06:33:27 -- common/autotest_common.sh@653 -- # es=237 00:08:31.789 06:33:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.789 06:33:27 -- common/autotest_common.sh@662 -- # es=109 00:08:31.789 06:33:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:31.789 06:33:27 -- common/autotest_common.sh@670 -- # es=1 00:08:31.789 06:33:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.789 06:33:27 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:31.789 06:33:27 -- dd/common.sh@172 -- # local id=1 00:08:31.789 06:33:27 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:31.789 06:33:27 -- dd/common.sh@176 -- # echo 1 00:08:31.789 06:33:27 -- dd/common.sh@177 -- # echo 1 00:08:31.789 06:33:27 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:32.048 00:08:32.048 real 0m13.477s 00:08:32.048 user 0m7.659s 00:08:32.048 sys 0m5.177s 00:08:32.048 06:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.048 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 END TEST dd_uring_copy 00:08:32.048 ************************************ 00:08:32.048 00:08:32.048 real 0m13.704s 00:08:32.048 user 0m7.783s 00:08:32.048 sys 0m5.284s 00:08:32.048 06:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.048 ************************************ 00:08:32.048 END TEST spdk_dd_uring 00:08:32.048 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 06:33:27 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:32.048 06:33:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.048 06:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.048 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.048 ************************************ 00:08:32.048 START TEST spdk_dd_sparse 00:08:32.048 ************************************ 00:08:32.048 06:33:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:32.308 * Looking for test storage... 00:08:32.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.308 06:33:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:32.308 06:33:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:32.308 06:33:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:32.308 06:33:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:32.308 06:33:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:32.308 06:33:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:32.308 06:33:27 -- scripts/common.sh@335 -- # IFS=.-: 00:08:32.308 06:33:27 -- scripts/common.sh@335 -- # read -ra ver1 00:08:32.308 06:33:27 -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.308 06:33:27 -- scripts/common.sh@336 -- # read -ra ver2 00:08:32.308 06:33:27 -- scripts/common.sh@337 -- # local 'op=<' 00:08:32.308 06:33:27 -- scripts/common.sh@339 -- # ver1_l=2 00:08:32.308 06:33:27 -- scripts/common.sh@340 -- # ver2_l=1 00:08:32.308 06:33:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:32.308 06:33:27 -- scripts/common.sh@343 -- # case "$op" in 00:08:32.308 06:33:27 -- scripts/common.sh@344 -- # : 1 00:08:32.308 06:33:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:32.308 06:33:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.308 06:33:27 -- scripts/common.sh@364 -- # decimal 1 00:08:32.308 06:33:27 -- scripts/common.sh@352 -- # local d=1 00:08:32.308 06:33:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.308 06:33:27 -- scripts/common.sh@354 -- # echo 1 00:08:32.308 06:33:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:32.308 06:33:27 -- scripts/common.sh@365 -- # decimal 2 00:08:32.308 06:33:27 -- scripts/common.sh@352 -- # local d=2 00:08:32.308 06:33:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.308 06:33:27 -- scripts/common.sh@354 -- # echo 2 00:08:32.308 06:33:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:32.308 06:33:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:32.308 06:33:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:32.308 06:33:27 -- scripts/common.sh@367 -- # return 0 00:08:32.308 06:33:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:32.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.308 --rc genhtml_branch_coverage=1 00:08:32.308 --rc genhtml_function_coverage=1 00:08:32.308 --rc genhtml_legend=1 00:08:32.308 --rc geninfo_all_blocks=1 00:08:32.308 --rc geninfo_unexecuted_blocks=1 00:08:32.308 00:08:32.308 ' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:32.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.308 --rc genhtml_branch_coverage=1 00:08:32.308 --rc genhtml_function_coverage=1 00:08:32.308 --rc genhtml_legend=1 00:08:32.308 --rc geninfo_all_blocks=1 00:08:32.308 --rc geninfo_unexecuted_blocks=1 00:08:32.308 00:08:32.308 ' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:32.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.308 --rc genhtml_branch_coverage=1 00:08:32.308 --rc genhtml_function_coverage=1 00:08:32.308 --rc genhtml_legend=1 00:08:32.308 --rc geninfo_all_blocks=1 00:08:32.308 --rc geninfo_unexecuted_blocks=1 00:08:32.308 00:08:32.308 ' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:32.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.308 --rc genhtml_branch_coverage=1 00:08:32.308 --rc genhtml_function_coverage=1 00:08:32.308 --rc genhtml_legend=1 00:08:32.308 --rc geninfo_all_blocks=1 00:08:32.308 --rc geninfo_unexecuted_blocks=1 00:08:32.308 00:08:32.308 ' 00:08:32.308 06:33:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.308 06:33:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.308 06:33:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.308 06:33:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.308 06:33:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.308 06:33:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.308 06:33:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.308 06:33:27 -- paths/export.sh@5 -- # export PATH 00:08:32.308 06:33:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.308 06:33:27 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:32.308 06:33:27 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:32.308 06:33:27 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:32.308 06:33:27 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:32.308 06:33:27 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:32.308 06:33:27 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:32.308 06:33:27 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:32.308 06:33:27 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:32.308 06:33:27 -- dd/sparse.sh@118 -- # prepare 00:08:32.308 06:33:27 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:32.308 06:33:27 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:32.308 1+0 records in 00:08:32.308 1+0 records out 00:08:32.308 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00650316 s, 645 MB/s 00:08:32.308 06:33:27 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:32.308 1+0 records in 00:08:32.308 1+0 records out 00:08:32.308 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00631652 s, 664 MB/s 00:08:32.308 06:33:27 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:32.308 1+0 records in 00:08:32.308 1+0 records out 00:08:32.308 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00659024 s, 636 MB/s 00:08:32.308 06:33:27 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:32.308 06:33:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.308 06:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.308 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.308 ************************************ 00:08:32.308 START TEST dd_sparse_file_to_file 00:08:32.308 
************************************ 00:08:32.308 06:33:27 -- common/autotest_common.sh@1114 -- # file_to_file 00:08:32.308 06:33:27 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:32.308 06:33:27 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:32.308 06:33:27 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:32.308 06:33:27 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:32.308 06:33:27 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:32.308 06:33:27 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:32.308 06:33:27 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:32.308 06:33:27 -- dd/sparse.sh@41 -- # gen_conf 00:08:32.308 06:33:27 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.308 06:33:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.308 [2024-12-05 06:33:27.753673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.308 [2024-12-05 06:33:27.753770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:08:32.308 { 00:08:32.308 "subsystems": [ 00:08:32.309 { 00:08:32.309 "subsystem": "bdev", 00:08:32.309 "config": [ 00:08:32.309 { 00:08:32.309 "params": { 00:08:32.309 "block_size": 4096, 00:08:32.309 "filename": "dd_sparse_aio_disk", 00:08:32.309 "name": "dd_aio" 00:08:32.309 }, 00:08:32.309 "method": "bdev_aio_create" 00:08:32.309 }, 00:08:32.309 { 00:08:32.309 "params": { 00:08:32.309 "lvs_name": "dd_lvstore", 00:08:32.309 "bdev_name": "dd_aio" 00:08:32.309 }, 00:08:32.309 "method": "bdev_lvol_create_lvstore" 00:08:32.309 }, 00:08:32.309 { 00:08:32.309 "method": "bdev_wait_for_examine" 00:08:32.309 } 00:08:32.309 ] 00:08:32.309 } 00:08:32.309 ] 00:08:32.309 } 00:08:32.568 [2024-12-05 06:33:27.891169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.568 [2024-12-05 06:33:27.921972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.568  [2024-12-05T06:33:28.294Z] Copying: 12/36 [MB] (average 2000 MBps) 00:08:32.828 00:08:32.828 06:33:28 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:32.828 06:33:28 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:32.828 06:33:28 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:32.828 06:33:28 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:32.828 06:33:28 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:32.828 06:33:28 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:32.828 06:33:28 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:32.828 06:33:28 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:32.828 06:33:28 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:32.828 06:33:28 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:32.828 00:08:32.828 real 0m0.504s 00:08:32.828 user 0m0.283s 00:08:32.828 sys 0m0.133s 00:08:32.828 06:33:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.828 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:32.828 ************************************ 00:08:32.828 END TEST dd_sparse_file_to_file 00:08:32.828 ************************************ 00:08:32.828 06:33:28 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:08:32.828 06:33:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.828 06:33:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.828 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:32.828 ************************************ 00:08:32.828 START TEST dd_sparse_file_to_bdev 00:08:32.828 ************************************ 00:08:32.828 06:33:28 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:08:32.828 06:33:28 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:32.828 06:33:28 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:32.828 06:33:28 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:32.828 06:33:28 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:32.828 06:33:28 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:32.828 06:33:28 -- dd/sparse.sh@73 -- # gen_conf 00:08:32.828 06:33:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.828 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.088 [2024-12-05 06:33:28.303492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:33.088 [2024-12-05 06:33:28.303575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71051 ] 00:08:33.088 { 00:08:33.088 "subsystems": [ 00:08:33.088 { 00:08:33.088 "subsystem": "bdev", 00:08:33.088 "config": [ 00:08:33.088 { 00:08:33.088 "params": { 00:08:33.088 "block_size": 4096, 00:08:33.088 "filename": "dd_sparse_aio_disk", 00:08:33.088 "name": "dd_aio" 00:08:33.088 }, 00:08:33.088 "method": "bdev_aio_create" 00:08:33.088 }, 00:08:33.088 { 00:08:33.088 "params": { 00:08:33.088 "lvs_name": "dd_lvstore", 00:08:33.088 "lvol_name": "dd_lvol", 00:08:33.088 "size": 37748736, 00:08:33.088 "thin_provision": true 00:08:33.088 }, 00:08:33.088 "method": "bdev_lvol_create" 00:08:33.088 }, 00:08:33.088 { 00:08:33.088 "method": "bdev_wait_for_examine" 00:08:33.088 } 00:08:33.088 ] 00:08:33.088 } 00:08:33.088 ] 00:08:33.088 } 00:08:33.088 [2024-12-05 06:33:28.430039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.088 [2024-12-05 06:33:28.467138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.088 [2024-12-05 06:33:28.528508] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:33.348 [2024-12-05T06:33:28.814Z] Copying: 12/36 [MB] (average 521 MBps) [2024-12-05 06:33:28.568043] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:33.348 00:08:33.348 00:08:33.348 real 0m0.486s 00:08:33.348 user 0m0.292s 00:08:33.348 sys 0m0.122s 00:08:33.348 06:33:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.348 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.348 ************************************ 00:08:33.348 END TEST dd_sparse_file_to_bdev 00:08:33.348 ************************************
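The dd_sparse tests traced here all follow one recipe: dd with seek punches three 4 MiB data islands into an otherwise hole-filled file, spdk_dd copies it with --sparse, and stat compares the apparent size (%s) against the allocated 512-byte blocks (%b). A minimal standalone sketch of that recipe, mirroring the commands in the traces above (the spdk_dd step is left commented because it needs the JSON bdev config the harness generates via gen_conf and --json /dev/fd/62):

    # Build a 36 MiB file whose data sits only at offsets 0, 16 MiB and 32 MiB.
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

    # Hole-aware copy through spdk_dd, as run by the test above:
    # spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json <(gen_conf)

    # Sparseness check: apparent size stays 37748736 (36 MiB) while only
    # 24576 512-byte blocks (12 MiB, the three written extents) are allocated.
    apparent=$(stat --printf=%s file_zero1)
    allocated=$(stat --printf=%b file_zero1)
    (( allocated * 512 < apparent )) && echo "file_zero1 is sparse"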
00:08:33.348 06:33:28 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:33.348 06:33:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.348 06:33:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.348 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.348 ************************************ 00:08:33.348 START TEST dd_sparse_bdev_to_file 00:08:33.348 ************************************ 00:08:33.348 06:33:28 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:08:33.348 06:33:28 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:33.348 06:33:28 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:33.348 06:33:28 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:33.348 06:33:28 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:33.348 06:33:28 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:33.348 06:33:28 -- dd/sparse.sh@91 -- # gen_conf 00:08:33.348 06:33:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.348 06:33:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.608 [2024-12-05 06:33:28.845345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:33.608 [2024-12-05 06:33:28.845445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:08:33.608 { 00:08:33.608 "subsystems": [ 00:08:33.608 { 00:08:33.608 "subsystem": "bdev", 00:08:33.608 "config": [ 00:08:33.608 { 00:08:33.608 "params": { 00:08:33.608 "block_size": 4096, 00:08:33.608 "filename": "dd_sparse_aio_disk", 00:08:33.608 "name": "dd_aio" 00:08:33.608 }, 00:08:33.608 "method": "bdev_aio_create" 00:08:33.608 }, 00:08:33.608 { 00:08:33.608 "method": "bdev_wait_for_examine" 00:08:33.608 } 00:08:33.608 ] 00:08:33.608 } 00:08:33.608 ] 00:08:33.608 } 00:08:33.608 [2024-12-05 06:33:28.982470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.608 [2024-12-05 06:33:29.012889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.867 [2024-12-05T06:33:29.333Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:33.867 00:08:33.867 06:33:29 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:33.867 06:33:29 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:33.867 06:33:29 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:33.867 06:33:29 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:33.867 06:33:29 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:33.867 06:33:29 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:33.867 06:33:29 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:33.867 06:33:29 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:33.867 06:33:29 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:33.867 06:33:29 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:33.867 00:08:33.867 real 0m0.495s 00:08:33.867 user 0m0.283s 00:08:33.867 sys 0m0.130s 00:08:33.867 06:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.867 ************************************ 00:08:33.867 END TEST dd_sparse_bdev_to_file ************************************ 00:08:33.867 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.127 06:33:29 --
dd/sparse.sh@1 -- # cleanup 00:08:34.127 06:33:29 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:34.127 06:33:29 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:34.127 06:33:29 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:34.127 06:33:29 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:34.127 00:08:34.127 real 0m1.881s 00:08:34.127 user 0m1.034s 00:08:34.127 sys 0m0.595s 00:08:34.127 06:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.127 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.127 ************************************ 00:08:34.127 END TEST spdk_dd_sparse 00:08:34.127 ************************************ 00:08:34.127 06:33:29 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:34.127 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.127 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.127 ************************************ 00:08:34.127 START TEST spdk_dd_negative 00:08:34.127 ************************************ 00:08:34.127 06:33:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:34.127 * Looking for test storage... 00:08:34.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:34.127 06:33:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.127 06:33:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.127 06:33:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.127 06:33:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.127 06:33:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.127 06:33:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.127 06:33:29 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.127 06:33:29 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.127 06:33:29 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.127 06:33:29 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.127 06:33:29 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.127 06:33:29 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.127 06:33:29 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.127 06:33:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.127 06:33:29 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.127 06:33:29 -- scripts/common.sh@344 -- # : 1 00:08:34.127 06:33:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.127 06:33:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.127 06:33:29 -- scripts/common.sh@364 -- # decimal 1 00:08:34.127 06:33:29 -- scripts/common.sh@352 -- # local d=1 00:08:34.127 06:33:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.127 06:33:29 -- scripts/common.sh@354 -- # echo 1 00:08:34.127 06:33:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.127 06:33:29 -- scripts/common.sh@365 -- # decimal 2 00:08:34.127 06:33:29 -- scripts/common.sh@352 -- # local d=2 00:08:34.127 06:33:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.127 06:33:29 -- scripts/common.sh@354 -- # echo 2 00:08:34.127 06:33:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.127 06:33:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.127 06:33:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.127 06:33:29 -- scripts/common.sh@367 -- # return 0 00:08:34.127 06:33:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 06:33:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 06:33:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.127 06:33:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.127 06:33:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.127 06:33:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.127 06:33:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.127 06:33:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.127 06:33:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.127 06:33:29 -- paths/export.sh@5 -- # export PATH 00:08:34.127 06:33:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.387 06:33:29 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.387 06:33:29 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.387 06:33:29 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.387 06:33:29 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.387 06:33:29 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:34.387 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.387 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.387 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.387 ************************************ 00:08:34.387 START TEST dd_invalid_arguments 00:08:34.387 ************************************ 00:08:34.387 06:33:29 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:08:34.387 06:33:29 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:34.387 06:33:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.387 06:33:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:34.387 06:33:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.387 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.387 06:33:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.387 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.387 06:33:29 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.387 06:33:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.387 06:33:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.387 06:33:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:34.387 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:34.387 options: 00:08:34.387 -c, --config JSON config file (default none) 00:08:34.387 --json JSON config file (default none) 00:08:34.387 --json-ignore-init-errors 00:08:34.387 don't exit on invalid config entry 00:08:34.387 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:34.387 -g, --single-file-segments 00:08:34.387 force creating just one hugetlbfs file 00:08:34.388 -h, --help show this usage 00:08:34.388 -i, --shm-id shared memory ID (optional) 00:08:34.388 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:34.388 --lcores lcore to CPU mapping list. The list is in the format: 00:08:34.388 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:08:34.388 lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"' 00:08:34.388 Within the group, '-' is used for range separator, 00:08:34.388 ',' is used for single number separator. 00:08:34.388 '( )' can be omitted for single element group, 00:08:34.388 '@' can be omitted if cpus and lcores have the same value 00:08:34.388 -n, --mem-channels channel number of memory channels used for DPDK 00:08:34.388 -p, --main-core main (primary) core for DPDK 00:08:34.388 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:34.388 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:34.388 --disable-cpumask-locks Disable CPU core lock files. 00:08:34.388 --silence-noticelog disable notice level logging to stderr 00:08:34.388 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:34.388 -u, --no-pci disable PCI access 00:08:34.388 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:34.388 --max-delay maximum reactor delay (in microseconds) 00:08:34.388 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:34.388 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:34.388 -R, --huge-unlink unlink huge files after initialization 00:08:34.388 -v, --version print SPDK version 00:08:34.388 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:34.388 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:34.388 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:34.388 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:34.388 Tracepoints vary in size and can use more than one trace entry.
00:08:34.388 --rpcs-allowed comma-separated list of permitted RPCs 00:08:34.388 --env-context Opaque context for use of the env implementation 00:08:34.388 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:34.388 --no-huge run without using hugepages 00:08:34.388 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:34.388 -e, --tpoint-group <group-name>[:<tpoint_mask>] 00:08:34.388 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:34.388 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:34.388 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:34.388 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:34.388 [2024-12-05 06:33:29.673571] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:34.388 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:34.388 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:34.388 [--------- DD Options ---------] 00:08:34.388 --if Input file. Must specify either --if or --ib. 00:08:34.388 --ib Input bdev. Must specify either --if or --ib. 00:08:34.388 --of Output file. Must specify either --of or --ob. 00:08:34.388 --ob Output bdev. Must specify either --of or --ob. 00:08:34.388 --iflag Input file flags. 00:08:34.388 --oflag Output file flags. 00:08:34.388 --bs I/O unit size (default: 4096) 00:08:34.388 --qd Queue depth (default: 2) 00:08:34.388 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:34.388 --skip Skip this many I/O units at start of input. (default: 0) 00:08:34.388 --seek Skip this many I/O units at start of output. (default: 0) 00:08:34.388 --aio Force usage of AIO.
(by default io_uring is used if available) 00:08:34.388 --sparse Enable hole skipping in input target 00:08:34.388 Available iflag and oflag values: 00:08:34.388 append - append mode 00:08:34.388 direct - use direct I/O for data 00:08:34.388 directory - fail unless a directory 00:08:34.388 dsync - use synchronized I/O for data 00:08:34.388 noatime - do not update access time 00:08:34.388 noctty - do not assign controlling terminal from file 00:08:34.388 nofollow - do not follow symlinks 00:08:34.388 nonblock - use non-blocking I/O 00:08:34.388 sync - use synchronized I/O for data and metadata 00:08:34.388 ************************************ 00:08:34.388 END TEST dd_invalid_arguments 00:08:34.388 ************************************ 00:08:34.388 06:33:29 -- common/autotest_common.sh@653 -- # es=2 00:08:34.388 06:33:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.388 06:33:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.388 06:33:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.388 00:08:34.388 real 0m0.095s 00:08:34.388 user 0m0.058s 00:08:34.388 sys 0m0.036s 00:08:34.388 06:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.388 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.388 06:33:29 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:34.388 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.388 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.388 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.388 ************************************ 00:08:34.388 START TEST dd_double_input 00:08:34.388 ************************************ 00:08:34.388 06:33:29 -- common/autotest_common.sh@1114 -- # double_input 00:08:34.388 06:33:29 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:34.388 06:33:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.388 06:33:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:34.388 06:33:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.388 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.388 06:33:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.388 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.388 06:33:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.388 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.388 06:33:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.388 06:33:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.388 06:33:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:34.388 [2024-12-05 06:33:29.806417] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
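Every negative case in this suite leans on the harness's NOT wrapper: the test passes only when spdk_dd exits non-zero, as it just did for the conflicting --if/--ib pair. A rough standalone equivalent of that check, not the harness's actual NOT implementation (the dump file and bdev names here are placeholders):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Expect failure: --if and --ib are mutually exclusive input selectors.
    if "$SPDK_DD" --if=dd.dump0 --ib=some_bdev --ob=other_bdev 2> err.log; then
        echo "FAIL: spdk_dd accepted both --if and --ib" >&2
        exit 1
    fi
    # The run above exits with 22 (EINVAL) and logs the conflict:
    grep -q 'You may specify either --if or --ib, but not both' err.log &&
        echo "PASS: conflicting inputs rejected"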
00:08:34.388 06:33:29 -- common/autotest_common.sh@653 -- # es=22 00:08:34.388 06:33:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.388 06:33:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.388 06:33:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.388 00:08:34.388 real 0m0.067s 00:08:34.388 user 0m0.033s 00:08:34.388 sys 0m0.033s 00:08:34.388 06:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.388 ************************************ 00:08:34.388 END TEST dd_double_input 00:08:34.388 ************************************ 00:08:34.388 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 06:33:29 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:34.649 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.649 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.649 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 ************************************ 00:08:34.649 START TEST dd_double_output 00:08:34.649 ************************************ 00:08:34.649 06:33:29 -- common/autotest_common.sh@1114 -- # double_output 00:08:34.649 06:33:29 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:34.649 06:33:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.649 06:33:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:34.649 06:33:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.649 06:33:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:34.649 [2024-12-05 06:33:29.924083] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
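The es bookkeeping that follows each failure comes from autotest_common.sh: plain error exits keep their code (22 here), while statuses above 128 encode death by signal and are folded back before the final not-zero check; later runs in this log show 244 -> 116, 236 -> 108 and 234 -> 106 each collapsing to 1. A condensed sketch of that handling, assuming rather than quoting the harness logic:

    not_sketch() {
        "$@"
        local es=$?
        if (( es > 128 )); then
            es=$(( es - 128 ))        # >128 means the tool died on a signal
            case "$es" in
                106|108|116) es=1 ;;  # foldings observed in the runs below
            esac
        fi
        (( es != 0 ))                 # a negative test passes only on failure
    }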
00:08:34.649 06:33:29 -- common/autotest_common.sh@653 -- # es=22 00:08:34.649 06:33:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.649 06:33:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.649 06:33:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.649 00:08:34.649 real 0m0.066s 00:08:34.649 user 0m0.035s 00:08:34.649 sys 0m0.030s 00:08:34.649 06:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.649 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 ************************************ 00:08:34.649 END TEST dd_double_output 00:08:34.649 ************************************ 00:08:34.649 06:33:29 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:34.649 06:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.649 06:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.649 06:33:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 ************************************ 00:08:34.649 START TEST dd_no_input 00:08:34.649 ************************************ 00:08:34.649 06:33:29 -- common/autotest_common.sh@1114 -- # no_input 00:08:34.649 06:33:29 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:34.649 06:33:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.649 06:33:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:34.649 06:33:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.649 06:33:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.649 06:33:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:34.649 [2024-12-05 06:33:30.041456] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:34.649 06:33:30 -- common/autotest_common.sh@653 -- # es=22 00:08:34.649 06:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.649 06:33:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.649 06:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.649 00:08:34.649 real 0m0.066s 00:08:34.649 user 0m0.040s 00:08:34.649 sys 0m0.025s 00:08:34.649 06:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.649 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 ************************************ 00:08:34.649 END TEST dd_no_input 00:08:34.649 ************************************ 00:08:34.649 06:33:30 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:34.649 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.649 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.649 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.649 ************************************ 
00:08:34.649 START TEST dd_no_output 00:08:34.649 ************************************ 00:08:34.649 06:33:30 -- common/autotest_common.sh@1114 -- # no_output 00:08:34.649 06:33:30 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.649 06:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.649 06:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.649 06:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.909 06:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.909 [2024-12-05 06:33:30.160643] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:34.909 06:33:30 -- common/autotest_common.sh@653 -- # es=22 00:08:34.909 06:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.909 06:33:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.909 06:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.909 00:08:34.909 real 0m0.065s 00:08:34.909 user 0m0.037s 00:08:34.909 sys 0m0.028s 00:08:34.909 06:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.909 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.909 ************************************ 00:08:34.909 END TEST dd_no_output 00:08:34.909 ************************************ 00:08:34.909 06:33:30 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:34.909 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.909 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.909 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.909 ************************************ 00:08:34.909 START TEST dd_wrong_blocksize 00:08:34.909 ************************************ 00:08:34.909 06:33:30 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:08:34.909 06:33:30 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:34.909 06:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.909 06:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:34.909 06:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.909 06:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:34.909 [2024-12-05 06:33:30.278934] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:34.909 06:33:30 -- common/autotest_common.sh@653 -- # es=22 00:08:34.909 06:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.909 06:33:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.909 06:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.909 00:08:34.909 real 0m0.065s 00:08:34.909 user 0m0.043s 00:08:34.909 sys 0m0.021s 00:08:34.909 06:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.909 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.909 ************************************ 00:08:34.909 END TEST dd_wrong_blocksize 00:08:34.909 ************************************ 00:08:34.909 06:33:30 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:34.909 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.909 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.909 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.909 ************************************ 00:08:34.909 START TEST dd_smaller_blocksize 00:08:34.909 ************************************ 00:08:34.909 06:33:30 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:08:34.909 06:33:30 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:34.909 06:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:34.909 06:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:34.909 06:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.909 06:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:08:34.909 06:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:35.168 [2024-12-05 06:33:30.402275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:35.168 [2024-12-05 06:33:30.402386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:08:35.168 [2024-12-05 06:33:30.541732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.168 [2024-12-05 06:33:30.585794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.428 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:35.428 [2024-12-05 06:33:30.645902] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:35.428 [2024-12-05 06:33:30.645935] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.428 [2024-12-05 06:33:30.711379] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:35.428 06:33:30 -- common/autotest_common.sh@653 -- # es=244 00:08:35.428 06:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.428 06:33:30 -- common/autotest_common.sh@662 -- # es=116 00:08:35.428 06:33:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.428 06:33:30 -- common/autotest_common.sh@670 -- # es=1 00:08:35.428 06:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.428 00:08:35.428 real 0m0.432s 00:08:35.428 user 0m0.214s 00:08:35.428 sys 0m0.113s 00:08:35.428 06:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.428 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:35.428 ************************************ 00:08:35.428 END TEST dd_smaller_blocksize 00:08:35.428 ************************************ 00:08:35.428 06:33:30 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:35.428 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.428 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.428 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:35.428 ************************************ 00:08:35.428 START TEST dd_invalid_count 00:08:35.428 ************************************ 00:08:35.428 06:33:30 -- common/autotest_common.sh@1114 -- # invalid_count 00:08:35.428 06:33:30 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:35.428 06:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.428 06:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:35.428 06:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.428 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.428 06:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.428 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.428 06:33:30 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.428 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.428 06:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.428 06:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.428 06:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:35.429 [2024-12-05 06:33:30.882297] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:35.688 06:33:30 -- common/autotest_common.sh@653 -- # es=22 00:08:35.688 06:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.688 06:33:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.688 06:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.688 00:08:35.688 real 0m0.067s 00:08:35.688 user 0m0.037s 00:08:35.688 sys 0m0.029s 00:08:35.688 06:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.688 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 ************************************ 00:08:35.688 END TEST dd_invalid_count 00:08:35.688 ************************************ 00:08:35.688 06:33:30 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:35.688 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.688 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.688 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 ************************************ 00:08:35.688 START TEST dd_invalid_oflag 00:08:35.688 ************************************ 00:08:35.688 06:33:30 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:08:35.688 06:33:30 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:35.688 06:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.688 06:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:35.688 06:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.688 06:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:35.688 [2024-12-05 06:33:31.002725] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:35.688 06:33:31 -- common/autotest_common.sh@653 -- # es=22 00:08:35.688 06:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.688 06:33:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.688 
06:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.688 00:08:35.688 real 0m0.064s 00:08:35.688 user 0m0.039s 00:08:35.688 sys 0m0.025s 00:08:35.688 06:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.688 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 ************************************ 00:08:35.688 END TEST dd_invalid_oflag 00:08:35.688 ************************************ 00:08:35.688 06:33:31 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:35.688 06:33:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.688 06:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.688 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 ************************************ 00:08:35.688 START TEST dd_invalid_iflag 00:08:35.688 ************************************ 00:08:35.688 06:33:31 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:08:35.688 06:33:31 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:35.688 06:33:31 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.688 06:33:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:35.688 06:33:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.688 06:33:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.688 06:33:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.688 06:33:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:35.688 [2024-12-05 06:33:31.120943] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:35.688 06:33:31 -- common/autotest_common.sh@653 -- # es=22 00:08:35.688 06:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.688 06:33:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.688 06:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.688 00:08:35.688 real 0m0.065s 00:08:35.688 user 0m0.036s 00:08:35.688 sys 0m0.028s 00:08:35.688 06:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.688 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:35.688 ************************************ 00:08:35.688 END TEST dd_invalid_iflag 00:08:35.688 ************************************ 00:08:35.948 06:33:31 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:35.948 06:33:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.948 06:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.948 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:35.948 ************************************ 00:08:35.948 START TEST dd_unknown_flag 00:08:35.948 ************************************ 00:08:35.948 06:33:31 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:08:35.948 06:33:31 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:35.948 06:33:31 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.948 06:33:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:35.948 06:33:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.948 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.948 06:33:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.948 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.948 06:33:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.948 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.948 06:33:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.948 06:33:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.948 06:33:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:35.948 [2024-12-05 06:33:31.238989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:35.948 [2024-12-05 06:33:31.239075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71398 ] 00:08:35.948 [2024-12-05 06:33:31.378144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.208 [2024-12-05 06:33:31.418144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.208 [2024-12-05 06:33:31.469379] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:36.208 [2024-12-05 06:33:31.469459] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:36.208 [2024-12-05 06:33:31.469475] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:36.208 [2024-12-05 06:33:31.469489] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.208 [2024-12-05 06:33:31.533829] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:36.208 06:33:31 -- common/autotest_common.sh@653 -- # es=236 00:08:36.208 06:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.208 06:33:31 -- common/autotest_common.sh@662 -- # es=108 00:08:36.208 06:33:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.208 06:33:31 -- common/autotest_common.sh@670 -- # es=1 00:08:36.208 06:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.208 00:08:36.208 real 0m0.414s 00:08:36.208 user 0m0.210s 00:08:36.208 sys 0m0.100s 00:08:36.208 ************************************ 00:08:36.208 END TEST dd_unknown_flag 00:08:36.208 ************************************ 00:08:36.208 06:33:31 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:08:36.208 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:36.208 06:33:31 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:36.208 06:33:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.208 06:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.208 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:36.208 ************************************ 00:08:36.208 START TEST dd_invalid_json 00:08:36.208 ************************************ 00:08:36.208 06:33:31 -- common/autotest_common.sh@1114 -- # invalid_json 00:08:36.208 06:33:31 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:36.208 06:33:31 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.208 06:33:31 -- dd/negative_dd.sh@95 -- # : 00:08:36.208 06:33:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:36.208 06:33:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.208 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.208 06:33:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.208 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.208 06:33:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.208 06:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.208 06:33:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.208 06:33:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.208 06:33:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:36.467 [2024-12-05 06:33:31.711284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:36.467 [2024-12-05 06:33:31.711406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71425 ] 00:08:36.467 [2024-12-05 06:33:31.849666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.467 [2024-12-05 06:33:31.889733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.467 [2024-12-05 06:33:31.889872] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:36.467 [2024-12-05 06:33:31.889895] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.467 [2024-12-05 06:33:31.889941] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:36.726 06:33:31 -- common/autotest_common.sh@653 -- # es=234 00:08:36.726 06:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.726 06:33:31 -- common/autotest_common.sh@662 -- # es=106 00:08:36.726 06:33:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.726 06:33:31 -- common/autotest_common.sh@670 -- # es=1 00:08:36.726 06:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.726 00:08:36.726 real 0m0.308s 00:08:36.726 user 0m0.143s 00:08:36.726 sys 0m0.064s 00:08:36.726 06:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.726 ************************************ 00:08:36.726 END TEST dd_invalid_json 00:08:36.726 ************************************ 00:08:36.726 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:36.726 00:08:36.726 real 0m2.604s 00:08:36.726 user 0m1.254s 00:08:36.726 sys 0m0.973s 00:08:36.726 06:33:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.726 ************************************ 00:08:36.726 END TEST spdk_dd_negative 00:08:36.726 ************************************ 00:08:36.726 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.726 00:08:36.726 real 1m0.794s 00:08:36.726 user 0m36.465s 00:08:36.726 sys 0m15.146s 00:08:36.726 06:33:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.726 ************************************ 00:08:36.726 END TEST spdk_dd 00:08:36.726 ************************************ 00:08:36.726 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.726 06:33:32 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:36.726 06:33:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.726 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.726 06:33:32 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:36.726 06:33:32 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:36.726 06:33:32 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.726 06:33:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.726 06:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.726 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.726 ************************************ 00:08:36.726 START TEST 
nvmf_tcp 00:08:36.726 ************************************ 00:08:36.726 06:33:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.986 * Looking for test storage... 00:08:36.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.986 06:33:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.986 06:33:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.986 06:33:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.986 06:33:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.986 06:33:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.986 06:33:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.986 06:33:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.986 06:33:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.986 06:33:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.986 06:33:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.986 06:33:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.986 06:33:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.986 06:33:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.986 06:33:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.986 06:33:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.986 06:33:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.986 06:33:32 -- scripts/common.sh@344 -- # : 1 00:08:36.986 06:33:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.986 06:33:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.986 06:33:32 -- scripts/common.sh@364 -- # decimal 1 00:08:36.986 06:33:32 -- scripts/common.sh@352 -- # local d=1 00:08:36.986 06:33:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.986 06:33:32 -- scripts/common.sh@354 -- # echo 1 00:08:36.986 06:33:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:36.986 06:33:32 -- scripts/common.sh@365 -- # decimal 2 00:08:36.986 06:33:32 -- scripts/common.sh@352 -- # local d=2 00:08:36.986 06:33:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.986 06:33:32 -- scripts/common.sh@354 -- # echo 2 00:08:36.986 06:33:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:36.986 06:33:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:36.986 06:33:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:36.986 06:33:32 -- scripts/common.sh@367 -- # return 0 00:08:36.986 06:33:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.986 06:33:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.986 --rc genhtml_branch_coverage=1 00:08:36.986 --rc genhtml_function_coverage=1 00:08:36.986 --rc genhtml_legend=1 00:08:36.986 --rc geninfo_all_blocks=1 00:08:36.986 --rc geninfo_unexecuted_blocks=1 00:08:36.986 00:08:36.986 ' 00:08:36.986 06:33:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.986 --rc genhtml_branch_coverage=1 00:08:36.986 --rc genhtml_function_coverage=1 00:08:36.986 --rc genhtml_legend=1 00:08:36.986 --rc geninfo_all_blocks=1 00:08:36.986 --rc geninfo_unexecuted_blocks=1 00:08:36.986 00:08:36.986 ' 00:08:36.986 06:33:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.986 --rc 
genhtml_branch_coverage=1 00:08:36.986 --rc genhtml_function_coverage=1 00:08:36.986 --rc genhtml_legend=1 00:08:36.986 --rc geninfo_all_blocks=1 00:08:36.986 --rc geninfo_unexecuted_blocks=1 00:08:36.986 00:08:36.986 ' 00:08:36.986 06:33:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.986 --rc genhtml_branch_coverage=1 00:08:36.986 --rc genhtml_function_coverage=1 00:08:36.986 --rc genhtml_legend=1 00:08:36.986 --rc geninfo_all_blocks=1 00:08:36.986 --rc geninfo_unexecuted_blocks=1 00:08:36.986 00:08:36.986 ' 00:08:36.986 06:33:32 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:36.986 06:33:32 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:36.986 06:33:32 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.986 06:33:32 -- nvmf/common.sh@7 -- # uname -s 00:08:36.986 06:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.986 06:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.986 06:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.986 06:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.986 06:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.986 06:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.986 06:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.986 06:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.986 06:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.986 06:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.986 06:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:36.986 06:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:36.986 06:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.986 06:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.986 06:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.986 06:33:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.986 06:33:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.986 06:33:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.986 06:33:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.986 06:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.987 06:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.987 06:33:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.987 06:33:32 -- paths/export.sh@5 -- # export PATH 00:08:36.987 06:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.987 06:33:32 -- nvmf/common.sh@46 -- # : 0 00:08:36.987 06:33:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:36.987 06:33:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:36.987 06:33:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:36.987 06:33:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.987 06:33:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.987 06:33:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:36.987 06:33:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:36.987 06:33:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:36.987 06:33:32 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:36.987 06:33:32 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:36.987 06:33:32 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:36.987 06:33:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.987 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.987 06:33:32 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:36.987 06:33:32 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.987 06:33:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.987 06:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.987 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.987 ************************************ 00:08:36.987 START TEST nvmf_host_management 00:08:36.987 ************************************ 00:08:36.987 06:33:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.987 * Looking for test storage... 
00:08:37.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.246 06:33:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:37.246 06:33:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:37.246 06:33:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:37.246 06:33:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:37.246 06:33:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:37.246 06:33:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:37.246 06:33:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:37.246 06:33:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:37.246 06:33:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:37.246 06:33:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.246 06:33:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:37.246 06:33:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:37.246 06:33:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:37.246 06:33:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:37.246 06:33:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:37.246 06:33:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:37.246 06:33:32 -- scripts/common.sh@344 -- # : 1 00:08:37.246 06:33:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:37.246 06:33:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.246 06:33:32 -- scripts/common.sh@364 -- # decimal 1 00:08:37.246 06:33:32 -- scripts/common.sh@352 -- # local d=1 00:08:37.246 06:33:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.246 06:33:32 -- scripts/common.sh@354 -- # echo 1 00:08:37.246 06:33:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.246 06:33:32 -- scripts/common.sh@365 -- # decimal 2 00:08:37.246 06:33:32 -- scripts/common.sh@352 -- # local d=2 00:08:37.246 06:33:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.246 06:33:32 -- scripts/common.sh@354 -- # echo 2 00:08:37.246 06:33:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.246 06:33:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.246 06:33:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.246 06:33:32 -- scripts/common.sh@367 -- # return 0 00:08:37.246 06:33:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.246 06:33:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.246 --rc genhtml_branch_coverage=1 00:08:37.246 --rc genhtml_function_coverage=1 00:08:37.246 --rc genhtml_legend=1 00:08:37.246 --rc geninfo_all_blocks=1 00:08:37.246 --rc geninfo_unexecuted_blocks=1 00:08:37.246 00:08:37.246 ' 00:08:37.246 06:33:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.246 --rc genhtml_branch_coverage=1 00:08:37.246 --rc genhtml_function_coverage=1 00:08:37.246 --rc genhtml_legend=1 00:08:37.246 --rc geninfo_all_blocks=1 00:08:37.246 --rc geninfo_unexecuted_blocks=1 00:08:37.246 00:08:37.246 ' 00:08:37.246 06:33:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.246 --rc genhtml_branch_coverage=1 00:08:37.246 --rc genhtml_function_coverage=1 00:08:37.246 --rc genhtml_legend=1 00:08:37.246 --rc geninfo_all_blocks=1 00:08:37.246 --rc geninfo_unexecuted_blocks=1 00:08:37.246 00:08:37.246 ' 00:08:37.246 
06:33:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.246 --rc genhtml_branch_coverage=1 00:08:37.246 --rc genhtml_function_coverage=1 00:08:37.246 --rc genhtml_legend=1 00:08:37.246 --rc geninfo_all_blocks=1 00:08:37.246 --rc geninfo_unexecuted_blocks=1 00:08:37.246 00:08:37.247 ' 00:08:37.247 06:33:32 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.247 06:33:32 -- nvmf/common.sh@7 -- # uname -s 00:08:37.247 06:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.247 06:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.247 06:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.247 06:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.247 06:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.247 06:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.247 06:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.247 06:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.247 06:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.247 06:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:37.247 06:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:37.247 06:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.247 06:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.247 06:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.247 06:33:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.247 06:33:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.247 06:33:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.247 06:33:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.247 06:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.247 06:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.247 06:33:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.247 06:33:32 -- paths/export.sh@5 -- # export PATH 00:08:37.247 06:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.247 06:33:32 -- nvmf/common.sh@46 -- # : 0 00:08:37.247 06:33:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.247 06:33:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.247 06:33:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.247 06:33:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.247 06:33:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.247 06:33:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.247 06:33:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.247 06:33:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.247 06:33:32 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.247 06:33:32 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.247 06:33:32 -- target/host_management.sh@104 -- # nvmftestinit 00:08:37.247 06:33:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:37.247 06:33:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.247 06:33:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.247 06:33:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.247 06:33:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.247 06:33:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.247 06:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.247 06:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.247 06:33:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:37.247 06:33:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:37.247 06:33:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.247 06:33:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.247 06:33:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:37.247 06:33:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:37.247 06:33:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.247 06:33:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.247 06:33:32 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.247 06:33:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.247 06:33:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.247 06:33:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.247 06:33:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.247 06:33:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.247 06:33:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:37.247 Cannot find device "nvmf_init_br" 00:08:37.247 06:33:32 -- nvmf/common.sh@153 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:37.247 Cannot find device "nvmf_tgt_br" 00:08:37.247 06:33:32 -- nvmf/common.sh@154 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.247 Cannot find device "nvmf_tgt_br2" 00:08:37.247 06:33:32 -- nvmf/common.sh@155 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:37.247 Cannot find device "nvmf_init_br" 00:08:37.247 06:33:32 -- nvmf/common.sh@156 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:37.247 Cannot find device "nvmf_tgt_br" 00:08:37.247 06:33:32 -- nvmf/common.sh@157 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:37.247 Cannot find device "nvmf_tgt_br2" 00:08:37.247 06:33:32 -- nvmf/common.sh@158 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:37.247 Cannot find device "nvmf_br" 00:08:37.247 06:33:32 -- nvmf/common.sh@159 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:37.247 Cannot find device "nvmf_init_if" 00:08:37.247 06:33:32 -- nvmf/common.sh@160 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.247 06:33:32 -- nvmf/common.sh@161 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.247 06:33:32 -- nvmf/common.sh@162 -- # true 00:08:37.247 06:33:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.247 06:33:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.507 06:33:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.507 06:33:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.507 06:33:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.507 06:33:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.507 06:33:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.507 06:33:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.507 06:33:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.507 06:33:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:37.507 06:33:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:37.507 06:33:32 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:37.507 06:33:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:37.507 06:33:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.507 06:33:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.507 06:33:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.507 06:33:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:37.507 06:33:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:37.507 06:33:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.507 06:33:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.507 06:33:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.507 06:33:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.507 06:33:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.507 06:33:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:37.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:37.767 00:08:37.767 --- 10.0.0.2 ping statistics --- 00:08:37.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.767 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:37.767 06:33:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:37.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:37.767 00:08:37.767 --- 10.0.0.3 ping statistics --- 00:08:37.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.767 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:37.767 06:33:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:37.767 00:08:37.767 --- 10.0.0.1 ping statistics --- 00:08:37.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.767 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:37.767 06:33:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.767 06:33:32 -- nvmf/common.sh@421 -- # return 0 00:08:37.767 06:33:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.767 06:33:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.767 06:33:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:37.767 06:33:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:37.767 06:33:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.767 06:33:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:37.767 06:33:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:37.767 06:33:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:37.767 06:33:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.767 06:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.767 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:37.767 ************************************ 00:08:37.767 START TEST nvmf_host_management 00:08:37.767 ************************************ 00:08:37.767 06:33:33 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:08:37.767 06:33:33 -- target/host_management.sh@69 -- # starttarget 00:08:37.767 06:33:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.767 06:33:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.767 06:33:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.767 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:37.767 06:33:33 -- nvmf/common.sh@469 -- # nvmfpid=71701 00:08:37.767 06:33:33 -- nvmf/common.sh@470 -- # waitforlisten 71701 00:08:37.767 06:33:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.767 06:33:33 -- common/autotest_common.sh@829 -- # '[' -z 71701 ']' 00:08:37.767 06:33:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.767 06:33:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.767 06:33:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.767 06:33:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.767 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:37.767 [2024-12-05 06:33:33.088602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:37.767 [2024-12-05 06:33:33.088722] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.767 [2024-12-05 06:33:33.230585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.026 [2024-12-05 06:33:33.272043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.026 [2024-12-05 06:33:33.272231] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
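
Before the target application comes up, it is worth recapping what `nvmf_veth_init` built in the trace above. This is a condensed sketch using only the device names, addresses, and rules that actually appear in the trace; the helper's cleanup and error handling are omitted.

```bash
# Condensed sketch of nvmf_veth_init as traced above: target-side veth ends
# live in the nvmf_tgt_ns_spdk namespace and are bridged back to the
# initiator-side ends on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```

The three pings in the trace then verify exactly this path: initiator to first target IP, initiator to second target IP, and target namespace back to the initiator, before `modprobe nvme-tcp` loads the host-side driver.
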
00:08:38.026 [2024-12-05 06:33:33.272246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.026 [2024-12-05 06:33:33.272257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.026 [2024-12-05 06:33:33.272797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.026 [2024-12-05 06:33:33.272989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.026 [2024-12-05 06:33:33.273123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.026 [2024-12-05 06:33:33.273128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.958 06:33:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.958 06:33:34 -- common/autotest_common.sh@862 -- # return 0 00:08:38.958 06:33:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:38.958 06:33:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.958 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 06:33:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.958 06:33:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.958 06:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.958 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 [2024-12-05 06:33:34.129100] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.958 06:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.958 06:33:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.958 06:33:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.958 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 06:33:34 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:38.958 06:33:34 -- target/host_management.sh@23 -- # cat 00:08:38.958 06:33:34 -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.958 06:33:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.958 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 Malloc0 00:08:38.958 [2024-12-05 06:33:34.196617] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.958 06:33:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.958 06:33:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.958 06:33:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.958 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.959 06:33:34 -- target/host_management.sh@73 -- # perfpid=71755 00:08:38.959 06:33:34 -- target/host_management.sh@74 -- # waitforlisten 71755 /var/tmp/bdevperf.sock 00:08:38.959 06:33:34 -- common/autotest_common.sh@829 -- # '[' -z 71755 ']' 00:08:38.959 06:33:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.959 06:33:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.959 06:33:34 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
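
The `cat` of rpcs.txt at target/host_management.sh@23 is not expanded in the trace; only the transport creation, the Malloc0 bdev, and the 10.0.0.2:4420 listener notices are visible. A plausible expansion, hedged accordingly, looks like this — the subsystem and namespace lines are assumptions based on standard SPDK RPC names and the NQNs used later in the run:

```bash
# Sketch of the target bring-up driven through the RPC socket; only the
# transport, Malloc0 and the 10.0.0.2:4420 listener appear in the trace
# itself, the rest is assumed from standard SPDK RPCs.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```
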
00:08:38.959 06:33:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.959 06:33:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.959 06:33:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.959 06:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:38.959 06:33:34 -- nvmf/common.sh@520 -- # config=() 00:08:38.959 06:33:34 -- nvmf/common.sh@520 -- # local subsystem config 00:08:38.959 06:33:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:38.959 06:33:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:38.959 { 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme$subsystem", 00:08:38.959 "trtype": "$TEST_TRANSPORT", 00:08:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "$NVMF_PORT", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.959 "hdgst": ${hdgst:-false}, 00:08:38.959 "ddgst": ${ddgst:-false} 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 } 00:08:38.959 EOF 00:08:38.959 )") 00:08:38.959 06:33:34 -- nvmf/common.sh@542 -- # cat 00:08:38.959 06:33:34 -- nvmf/common.sh@544 -- # jq . 00:08:38.959 06:33:34 -- nvmf/common.sh@545 -- # IFS=, 00:08:38.959 06:33:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme0", 00:08:38.959 "trtype": "tcp", 00:08:38.959 "traddr": "10.0.0.2", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "4420", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.959 "hdgst": false, 00:08:38.959 "ddgst": false 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 }' 00:08:38.959 [2024-12-05 06:33:34.296060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.959 [2024-12-05 06:33:34.296158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71755 ] 00:08:39.217 [2024-12-05 06:33:34.437047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.217 [2024-12-05 06:33:34.477061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.217 Running I/O for 10 seconds... 
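
The JSON that `gen_nvmf_target_json 0` prints above is the complete initiator-side configuration for this run. An equivalent standalone invocation is sketched below; the attach parameters are copied verbatim from the printf in the trace, while the surrounding "subsystems" wrapper is an assumption about what the helper emits around them:

```bash
# Reconstructed standalone equivalent of the bdevperf run traced above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10
```
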
00:08:40.151 06:33:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.151 06:33:35 -- common/autotest_common.sh@862 -- # return 0 00:08:40.151 06:33:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:40.151 06:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.151 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 06:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.151 06:33:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.151 06:33:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.151 06:33:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.151 06:33:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.151 06:33:35 -- target/host_management.sh@52 -- # local ret=1 00:08:40.151 06:33:35 -- target/host_management.sh@53 -- # local i 00:08:40.151 06:33:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.151 06:33:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.151 06:33:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.151 06:33:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.151 06:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.151 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 06:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.151 06:33:35 -- target/host_management.sh@55 -- # read_io_count=1907 00:08:40.151 06:33:35 -- target/host_management.sh@58 -- # '[' 1907 -ge 100 ']' 00:08:40.151 06:33:35 -- target/host_management.sh@59 -- # ret=0 00:08:40.151 06:33:35 -- target/host_management.sh@60 -- # break 00:08:40.151 06:33:35 -- target/host_management.sh@64 -- # return 0 00:08:40.151 06:33:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.151 06:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.151 06:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.151 [2024-12-05 06:33:35.323903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.323996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the 
state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a934f0 is same with the state(5) to be set 00:08:40.151 [2024-12-05 06:33:35.324142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:40.151 [2024-12-05 06:33:35.324788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.151 [2024-12-05 06:33:35.324901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.151 [2024-12-05 06:33:35.324910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.152 [2024-12-05 06:33:35.324920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.152 [2024-12-05 06:33:35.324928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.152 [2024-12-05 06:33:35.324938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.152 [2024-12-05 06:33:35.324946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.152 [2024-12-05 06:33:35.324956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.152 [2024-12-05 06:33:35.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.152 
[2024-12-05 06:33:35.324974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:40.152 [2024-12-05 06:33:35.324982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:40.152 [2024-12-05 06:33:35.324992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:40.152 [2024-12-05 06:33:35.325000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... some two dozen further READ/WRITE command prints (06:33:35.325010 through 06:33:35.325549, varying cid and lba), each completed with the same ABORTED - SQ DELETION (00/08) status, elided ...]
00:08:40.152 [2024-12-05 06:33:35.325559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f120 is same with the state(5) to be set
00:08:40.152 [2024-12-05 06:33:35.325620] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x142f120 was disconnected and freed. reset controller.
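The burst of notices above is the expected fallout of this test phase: the target side tears down the I/O submission queue while bdevperf still has a full queue depth outstanding, so every in-flight READ/WRITE completes with NVMe status (00/08), that is, status code type 0h (generic command status) and status code 08h, Command Aborted due to SQ Deletion, after which bdev_nvme frees the disconnected qpair and resets the controller. When triaging a run like this, that chatter can be filtered out so unexpected failures stand out; a minimal sketch, assuming the console output was saved to build.log (the filename is hypothetical):

# Hide the expected SQ-deletion abort chatter, then show what else went wrong.
grep -v 'ABORTED - SQ DELETION' build.log | grep -E '\*(ERROR|WARNING)\*'
# Count how many in-flight commands the teardown aborted.
grep -c 'ABORTED - SQ DELETION' build.log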
00:08:40.152 06:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.152 06:33:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:08:40.152 [2024-12-05 06:33:35.326820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:08:40.152 06:33:35 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.152 06:33:35 -- common/autotest_common.sh@10 -- # set +x
00:08:40.152 task offset: 1024 on job bdev=Nvme0n1 fails
00:08:40.152
00:08:40.152 Latency(us)
00:08:40.152 [2024-12-05T06:33:35.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.152 [2024-12-05T06:33:35.618Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:40.152 [2024-12-05T06:33:35.618Z] Job: Nvme0n1 ended in about 0.71 seconds with error
00:08:40.152 Verification LBA range: start 0x0 length 0x400
00:08:40.152 Nvme0n1 : 0.71 2853.72 178.36 89.66 0.00 21379.57 5481.19 31218.97
00:08:40.152 [2024-12-05T06:33:35.618Z] ===================================================================================================================
00:08:40.152 [2024-12-05T06:33:35.618Z] Total : 2853.72 178.36 89.66 0.00 21379.57 5481.19 31218.97
00:08:40.152 [2024-12-05 06:33:35.329103] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:40.152 [2024-12-05 06:33:35.329235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14316a0 (9): Bad file descriptor
00:08:40.152 06:33:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.152 06:33:35 -- target/host_management.sh@87 -- # sleep 1
00:08:40.152 [2024-12-05 06:33:35.339012] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:41.083 06:33:36 -- target/host_management.sh@91 -- # kill -9 71755
00:08:41.083 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71755) - No such process
00:08:41.083 06:33:36 -- target/host_management.sh@91 -- # true
00:08:41.083 06:33:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:41.083 06:33:36 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:41.083 06:33:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:41.083 06:33:36 -- nvmf/common.sh@520 -- # config=()
00:08:41.083 06:33:36 -- nvmf/common.sh@520 -- # local subsystem config
00:08:41.083 06:33:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:08:41.083 06:33:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:08:41.083 {
00:08:41.083 "params": {
00:08:41.083 "name": "Nvme$subsystem",
00:08:41.083 "trtype": "$TEST_TRANSPORT",
00:08:41.083 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:41.083 "adrfam": "ipv4",
00:08:41.083 "trsvcid": "$NVMF_PORT",
00:08:41.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:41.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:41.083 "hdgst": ${hdgst:-false},
00:08:41.083 "ddgst": ${ddgst:-false}
00:08:41.083 },
00:08:41.083 "method": "bdev_nvme_attach_controller"
00:08:41.083 }
00:08:41.083 EOF
00:08:41.083 )")
00:08:41.083 06:33:36 -- nvmf/common.sh@542 -- # cat
00:08:41.083 06:33:36 -- nvmf/common.sh@544 -- # jq .
00:08:41.083 06:33:36 -- nvmf/common.sh@545 -- # IFS=,
00:08:41.083 06:33:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:08:41.083 "params": {
00:08:41.083 "name": "Nvme0",
00:08:41.083 "trtype": "tcp",
00:08:41.083 "traddr": "10.0.0.2",
00:08:41.083 "adrfam": "ipv4",
00:08:41.083 "trsvcid": "4420",
00:08:41.083 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:41.083 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:41.083 "hdgst": false,
00:08:41.083 "ddgst": false
00:08:41.083 },
00:08:41.083 "method": "bdev_nvme_attach_controller"
00:08:41.083 }'
[2024-12-05 06:33:36.393802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
[2024-12-05 06:33:36.393890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71793 ]
[2024-12-05 06:33:36.532863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 06:33:36.566218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:08:42.273
00:08:42.273 Latency(us)
[2024-12-05T06:33:37.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T06:33:37.739Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:42.273 Verification LBA range: start 0x0 length 0x400
00:08:42.273 Nvme0n1 : 1.01 3054.20 190.89 0.00 0.00 20632.68 1295.83 24903.68
[2024-12-05T06:33:37.739Z] ===================================================================================================================
[2024-12-05T06:33:37.739Z] Total : 3054.20 190.89 0.00 0.00 20632.68 1295.83 24903.68
00:08:42.532 06:33:37 -- target/host_management.sh@101 -- # stoptarget
00:08:42.532 06:33:37 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:42.532 06:33:37 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:08:42.532 06:33:37 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:08:42.532 06:33:37 -- target/host_management.sh@40 -- # nvmftestfini
00:08:42.532 06:33:37 -- nvmf/common.sh@476 -- # nvmfcleanup
00:08:42.532 06:33:37 -- nvmf/common.sh@116 -- # sync
00:08:42.532 06:33:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:08:42.532 06:33:37 -- nvmf/common.sh@119 -- # set +e
00:08:42.532 06:33:37 -- nvmf/common.sh@120 -- # for i in {1..20}
00:08:42.532 06:33:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:08:42.532 rmmod nvme_tcp
00:08:42.532 rmmod nvme_fabrics
00:08:42.532 rmmod nvme_keyring
00:08:42.532 06:33:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:08:42.532 06:33:37 -- nvmf/common.sh@123 -- # set -e
00:08:42.532 06:33:37 -- nvmf/common.sh@124 -- # return 0
00:08:42.532 06:33:37 -- nvmf/common.sh@477 -- # '[' -n 71701 ']'
00:08:42.532 06:33:37 -- nvmf/common.sh@478 -- # killprocess 71701
00:08:42.532 06:33:37 -- common/autotest_common.sh@936 -- # '[' -z 71701 ']'
00:08:42.532 06:33:37 -- common/autotest_common.sh@940 -- # kill -0 71701
00:08:42.532 06:33:37 -- common/autotest_common.sh@941 -- # uname
00:08:42.791 06:33:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:42.791 06:33:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71701
killing process with pid 71701 00:08:42.791 06:33:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:42.791 06:33:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:42.791 06:33:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71701' 00:08:42.791 06:33:38 -- common/autotest_common.sh@955 -- # kill 71701 00:08:42.791 06:33:38 -- common/autotest_common.sh@960 -- # wait 71701 00:08:42.791 [2024-12-05 06:33:38.159847] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:42.791 06:33:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.791 06:33:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.791 06:33:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.791 06:33:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.791 06:33:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.791 06:33:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.791 06:33:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.791 06:33:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.791 06:33:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:42.791 00:08:42.791 real 0m5.189s 00:08:42.791 user 0m21.993s 00:08:42.791 sys 0m1.185s 00:08:42.791 06:33:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.791 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:42.791 ************************************ 00:08:42.791 END TEST nvmf_host_management 00:08:42.791 ************************************ 00:08:43.050 06:33:38 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:43.050 00:08:43.050 real 0m5.892s 00:08:43.050 user 0m22.186s 00:08:43.050 sys 0m1.464s 00:08:43.050 06:33:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.050 ************************************ 00:08:43.050 END TEST nvmf_host_management 00:08:43.050 ************************************ 00:08:43.050 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.050 06:33:38 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.050 06:33:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.050 06:33:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.050 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.050 ************************************ 00:08:43.050 START TEST nvmf_lvol 00:08:43.050 ************************************ 00:08:43.050 06:33:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.050 * Looking for test storage... 
00:08:43.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.050 06:33:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:43.050 06:33:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:43.050 06:33:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:43.050 06:33:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:43.050 06:33:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:43.050 06:33:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:43.050 06:33:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:43.050 06:33:38 -- scripts/common.sh@335 -- # IFS=.-: 00:08:43.050 06:33:38 -- scripts/common.sh@335 -- # read -ra ver1 00:08:43.050 06:33:38 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.050 06:33:38 -- scripts/common.sh@336 -- # read -ra ver2 00:08:43.050 06:33:38 -- scripts/common.sh@337 -- # local 'op=<' 00:08:43.050 06:33:38 -- scripts/common.sh@339 -- # ver1_l=2 00:08:43.050 06:33:38 -- scripts/common.sh@340 -- # ver2_l=1 00:08:43.050 06:33:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:43.050 06:33:38 -- scripts/common.sh@343 -- # case "$op" in 00:08:43.050 06:33:38 -- scripts/common.sh@344 -- # : 1 00:08:43.050 06:33:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:43.050 06:33:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.050 06:33:38 -- scripts/common.sh@364 -- # decimal 1 00:08:43.050 06:33:38 -- scripts/common.sh@352 -- # local d=1 00:08:43.050 06:33:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.050 06:33:38 -- scripts/common.sh@354 -- # echo 1 00:08:43.050 06:33:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:43.050 06:33:38 -- scripts/common.sh@365 -- # decimal 2 00:08:43.050 06:33:38 -- scripts/common.sh@352 -- # local d=2 00:08:43.050 06:33:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.050 06:33:38 -- scripts/common.sh@354 -- # echo 2 00:08:43.050 06:33:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:43.050 06:33:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:43.050 06:33:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:43.051 06:33:38 -- scripts/common.sh@367 -- # return 0 00:08:43.051 06:33:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.051 06:33:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.051 --rc genhtml_branch_coverage=1 00:08:43.051 --rc genhtml_function_coverage=1 00:08:43.051 --rc genhtml_legend=1 00:08:43.051 --rc geninfo_all_blocks=1 00:08:43.051 --rc geninfo_unexecuted_blocks=1 00:08:43.051 00:08:43.051 ' 00:08:43.051 06:33:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.051 --rc genhtml_branch_coverage=1 00:08:43.051 --rc genhtml_function_coverage=1 00:08:43.051 --rc genhtml_legend=1 00:08:43.051 --rc geninfo_all_blocks=1 00:08:43.051 --rc geninfo_unexecuted_blocks=1 00:08:43.051 00:08:43.051 ' 00:08:43.051 06:33:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.051 --rc genhtml_branch_coverage=1 00:08:43.051 --rc genhtml_function_coverage=1 00:08:43.051 --rc genhtml_legend=1 00:08:43.051 --rc geninfo_all_blocks=1 00:08:43.051 --rc geninfo_unexecuted_blocks=1 00:08:43.051 00:08:43.051 ' 00:08:43.051 
06:33:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.051 --rc genhtml_branch_coverage=1 00:08:43.051 --rc genhtml_function_coverage=1 00:08:43.051 --rc genhtml_legend=1 00:08:43.051 --rc geninfo_all_blocks=1 00:08:43.051 --rc geninfo_unexecuted_blocks=1 00:08:43.051 00:08:43.051 ' 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.051 06:33:38 -- nvmf/common.sh@7 -- # uname -s 00:08:43.051 06:33:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.051 06:33:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.051 06:33:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.051 06:33:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.051 06:33:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.051 06:33:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.051 06:33:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.051 06:33:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.051 06:33:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.051 06:33:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:43.051 06:33:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:43.051 06:33:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.051 06:33:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.051 06:33:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.051 06:33:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.051 06:33:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.051 06:33:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.051 06:33:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.051 06:33:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.051 06:33:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.051 06:33:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.051 06:33:38 -- paths/export.sh@5 -- # export PATH 00:08:43.051 06:33:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.051 06:33:38 -- nvmf/common.sh@46 -- # : 0 00:08:43.051 06:33:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:43.051 06:33:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:43.051 06:33:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:43.051 06:33:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.051 06:33:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.051 06:33:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:43.051 06:33:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:43.051 06:33:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.051 06:33:38 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:43.051 06:33:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:43.051 06:33:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.051 06:33:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:43.051 06:33:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:43.051 06:33:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:43.051 06:33:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.051 06:33:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.051 06:33:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.051 06:33:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:43.051 06:33:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:43.051 06:33:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.051 06:33:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.051 06:33:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:43.051 06:33:38 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:43.051 06:33:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.051 06:33:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.051 06:33:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.051 06:33:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.051 06:33:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.051 06:33:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.051 06:33:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.051 06:33:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.051 06:33:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:43.310 06:33:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:43.310 Cannot find device "nvmf_tgt_br" 00:08:43.310 06:33:38 -- nvmf/common.sh@154 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.310 Cannot find device "nvmf_tgt_br2" 00:08:43.310 06:33:38 -- nvmf/common.sh@155 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:43.310 06:33:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:43.310 Cannot find device "nvmf_tgt_br" 00:08:43.310 06:33:38 -- nvmf/common.sh@157 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:43.310 Cannot find device "nvmf_tgt_br2" 00:08:43.310 06:33:38 -- nvmf/common.sh@158 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:43.310 06:33:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:43.310 06:33:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.310 06:33:38 -- nvmf/common.sh@161 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.310 06:33:38 -- nvmf/common.sh@162 -- # true 00:08:43.310 06:33:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.310 06:33:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.310 06:33:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.310 06:33:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.310 06:33:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.310 06:33:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.310 06:33:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.310 06:33:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.310 06:33:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.310 06:33:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:43.310 06:33:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:43.310 06:33:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:43.310 06:33:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:43.310 06:33:38 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.310 06:33:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.310 06:33:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.310 06:33:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:43.310 06:33:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:43.310 06:33:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.310 06:33:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.569 06:33:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.569 06:33:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.569 06:33:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.569 06:33:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:43.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:43.569 00:08:43.569 --- 10.0.0.2 ping statistics --- 00:08:43.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.569 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:43.569 06:33:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:43.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:43.569 00:08:43.569 --- 10.0.0.3 ping statistics --- 00:08:43.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.569 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:43.569 06:33:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:43.569 00:08:43.569 --- 10.0.0.1 ping statistics --- 00:08:43.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.569 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:43.569 06:33:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.569 06:33:38 -- nvmf/common.sh@421 -- # return 0 00:08:43.569 06:33:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.569 06:33:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.569 06:33:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:43.569 06:33:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:43.569 06:33:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.569 06:33:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:43.569 06:33:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.569 06:33:38 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:43.569 06:33:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.569 06:33:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.569 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.569 06:33:38 -- nvmf/common.sh@469 -- # nvmfpid=72020 00:08:43.569 06:33:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:43.569 06:33:38 -- nvmf/common.sh@470 -- # waitforlisten 72020 00:08:43.569 06:33:38 -- common/autotest_common.sh@829 -- # '[' -z 72020 ']' 00:08:43.570 06:33:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.570 06:33:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.570 06:33:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.570 06:33:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.570 06:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.570 [2024-12-05 06:33:38.899354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:43.570 [2024-12-05 06:33:38.899443] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.828 [2024-12-05 06:33:39.039905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.828 [2024-12-05 06:33:39.079447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.828 [2024-12-05 06:33:39.079842] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.828 [2024-12-05 06:33:39.080042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.828 [2024-12-05 06:33:39.080066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
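Once the reactors below are up, nvmf_lvol.sh builds its whole stack over JSON-RPC before pointing spdk_nvme_perf at it: two 64 MiB malloc bdevs striped into a RAID-0, a logical volume store on top of the RAID, a 20 MiB lvol, and finally an NVMe-oF subsystem exposing that lvol on a TCP listener. Condensed from the trace that follows, a minimal sketch, not the verbatim harness; it assumes a running nvmf_tgt and abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py to rpc.py:

# Transport and backing devices (64 MiB malloc bdevs, 512-byte blocks).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512    # -> Malloc0
rpc.py bdev_malloc_create 64 512    # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
# Logical volume store and a 20 MiB lvol; both calls return UUIDs.
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
# Export the lvol over NVMe/TCP.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420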
00:08:43.828 [2024-12-05 06:33:39.080367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.828 [2024-12-05 06:33:39.080226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.828 [2024-12-05 06:33:39.080360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.408 06:33:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.408 06:33:39 -- common/autotest_common.sh@862 -- # return 0 00:08:44.408 06:33:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.408 06:33:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.408 06:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:44.675 06:33:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.675 06:33:39 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.933 [2024-12-05 06:33:40.147983] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.933 06:33:40 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.191 06:33:40 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:45.191 06:33:40 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.450 06:33:40 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:45.450 06:33:40 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:45.709 06:33:40 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:45.968 06:33:41 -- target/nvmf_lvol.sh@29 -- # lvs=c058644f-4b62-479e-8a1f-525953596ab4 00:08:45.968 06:33:41 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c058644f-4b62-479e-8a1f-525953596ab4 lvol 20 00:08:46.228 06:33:41 -- target/nvmf_lvol.sh@32 -- # lvol=fb88c1d1-2b3d-42fb-b5ed-db82e1ceaa14 00:08:46.228 06:33:41 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.488 06:33:41 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb88c1d1-2b3d-42fb-b5ed-db82e1ceaa14 00:08:46.747 06:33:41 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:46.748 [2024-12-05 06:33:42.199911] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.007 06:33:42 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.268 06:33:42 -- target/nvmf_lvol.sh@42 -- # perf_pid=72101 00:08:47.268 06:33:42 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:47.268 06:33:42 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:48.246 06:33:43 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot fb88c1d1-2b3d-42fb-b5ed-db82e1ceaa14 MY_SNAPSHOT 00:08:48.506 06:33:43 -- target/nvmf_lvol.sh@47 -- # snapshot=e7fbaaf5-5d79-49ce-b1fd-5fe76db470c4 00:08:48.506 06:33:43 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize fb88c1d1-2b3d-42fb-b5ed-db82e1ceaa14 30
06:33:44 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e7fbaaf5-5d79-49ce-b1fd-5fe76db470c4 MY_CLONE
00:08:49.024 06:33:44 -- target/nvmf_lvol.sh@49 -- # clone=def93dba-b3d2-44a6-9fd0-56138e372428
00:08:49.024 06:33:44 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate def93dba-b3d2-44a6-9fd0-56138e372428
00:08:49.593 06:33:44 -- target/nvmf_lvol.sh@53 -- # wait 72101
00:08:57.713 Initializing NVMe Controllers
00:08:57.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:08:57.713 Controller IO queue size 128, less than required.
00:08:57.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:57.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:08:57.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:08:57.713 Initialization complete. Launching workers.
00:08:57.713 ========================================================
00:08:57.713 Latency(us)
00:08:57.713 Device Information : IOPS MiB/s Average min max
00:08:57.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10524.40 41.11 12162.44 2161.77 59814.31
00:08:57.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10465.30 40.88 12240.29 2905.76 61681.38
00:08:57.713 ========================================================
00:08:57.713 Total : 20989.70 81.99 12201.26 2161.77 61681.38
00:08:57.713
00:08:57.713 06:33:52 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:57.713 06:33:53 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb88c1d1-2b3d-42fb-b5ed-db82e1ceaa14
00:08:57.972 06:33:53 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c058644f-4b62-479e-8a1f-525953596ab4
00:08:58.232 06:33:53 -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:58.232 06:33:53 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:58.232 06:33:53 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:58.232 06:33:53 -- nvmf/common.sh@476 -- # nvmfcleanup
00:08:58.232 06:33:53 -- nvmf/common.sh@116 -- # sync
00:08:58.232 06:33:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:08:58.232 06:33:53 -- nvmf/common.sh@119 -- # set +e
00:08:58.232 06:33:53 -- nvmf/common.sh@120 -- # for i in {1..20}
00:08:58.232 06:33:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:08:58.232 rmmod nvme_tcp
00:08:58.232 rmmod nvme_fabrics
00:08:58.232 rmmod nvme_keyring
00:08:58.232 06:33:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:08:58.232 06:33:53 -- nvmf/common.sh@123 -- # set -e
00:08:58.232 06:33:53 -- nvmf/common.sh@124 -- # return 0
00:08:58.232 06:33:53 -- nvmf/common.sh@477 -- # '[' -n 72020 ']'
00:08:58.232 06:33:53 -- nvmf/common.sh@478 -- # killprocess 72020
00:08:58.232 06:33:53 -- common/autotest_common.sh@936 -- # '[' -z 72020 ']'
00:08:58.232 06:33:53 -- common/autotest_common.sh@940 -- # kill -0 72020
00:08:58.232 06:33:53 -- common/autotest_common.sh@941 -- # uname
00:08:58.232 06:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:58.232 06:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o
comm= 72020 00:08:58.232 killing process with pid 72020 00:08:58.232 06:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:58.232 06:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:58.232 06:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72020' 00:08:58.232 06:33:53 -- common/autotest_common.sh@955 -- # kill 72020 00:08:58.232 06:33:53 -- common/autotest_common.sh@960 -- # wait 72020 00:08:58.491 06:33:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:58.491 06:33:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:58.491 06:33:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:58.491 06:33:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.491 06:33:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:58.491 06:33:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.491 06:33:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.491 06:33:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.491 06:33:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:58.491 00:08:58.491 real 0m15.518s 00:08:58.491 user 1m4.643s 00:08:58.491 sys 0m4.434s 00:08:58.491 06:33:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.491 06:33:53 -- common/autotest_common.sh@10 -- # set +x 00:08:58.491 ************************************ 00:08:58.491 END TEST nvmf_lvol 00:08:58.491 ************************************ 00:08:58.491 06:33:53 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.491 06:33:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:58.491 06:33:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.491 06:33:53 -- common/autotest_common.sh@10 -- # set +x 00:08:58.491 ************************************ 00:08:58.491 START TEST nvmf_lvs_grow 00:08:58.491 ************************************ 00:08:58.491 06:33:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.491 * Looking for test storage... 
00:08:58.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.491 06:33:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:58.491 06:33:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:58.491 06:33:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:58.751 06:33:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:58.751 06:33:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:58.751 06:33:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:58.751 06:33:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:58.751 06:33:54 -- scripts/common.sh@335 -- # IFS=.-: 00:08:58.751 06:33:54 -- scripts/common.sh@335 -- # read -ra ver1 00:08:58.751 06:33:54 -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.751 06:33:54 -- scripts/common.sh@336 -- # read -ra ver2 00:08:58.751 06:33:54 -- scripts/common.sh@337 -- # local 'op=<' 00:08:58.751 06:33:54 -- scripts/common.sh@339 -- # ver1_l=2 00:08:58.751 06:33:54 -- scripts/common.sh@340 -- # ver2_l=1 00:08:58.751 06:33:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:58.751 06:33:54 -- scripts/common.sh@343 -- # case "$op" in 00:08:58.751 06:33:54 -- scripts/common.sh@344 -- # : 1 00:08:58.751 06:33:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:58.751 06:33:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.751 06:33:54 -- scripts/common.sh@364 -- # decimal 1 00:08:58.751 06:33:54 -- scripts/common.sh@352 -- # local d=1 00:08:58.751 06:33:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.751 06:33:54 -- scripts/common.sh@354 -- # echo 1 00:08:58.751 06:33:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:58.751 06:33:54 -- scripts/common.sh@365 -- # decimal 2 00:08:58.751 06:33:54 -- scripts/common.sh@352 -- # local d=2 00:08:58.751 06:33:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.751 06:33:54 -- scripts/common.sh@354 -- # echo 2 00:08:58.751 06:33:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:58.751 06:33:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:58.751 06:33:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:58.751 06:33:54 -- scripts/common.sh@367 -- # return 0 00:08:58.751 06:33:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.751 06:33:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.751 --rc genhtml_branch_coverage=1 00:08:58.751 --rc genhtml_function_coverage=1 00:08:58.751 --rc genhtml_legend=1 00:08:58.751 --rc geninfo_all_blocks=1 00:08:58.751 --rc geninfo_unexecuted_blocks=1 00:08:58.751 00:08:58.751 ' 00:08:58.751 06:33:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.751 --rc genhtml_branch_coverage=1 00:08:58.751 --rc genhtml_function_coverage=1 00:08:58.751 --rc genhtml_legend=1 00:08:58.751 --rc geninfo_all_blocks=1 00:08:58.751 --rc geninfo_unexecuted_blocks=1 00:08:58.751 00:08:58.751 ' 00:08:58.751 06:33:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.751 --rc genhtml_branch_coverage=1 00:08:58.751 --rc genhtml_function_coverage=1 00:08:58.751 --rc genhtml_legend=1 00:08:58.751 --rc geninfo_all_blocks=1 00:08:58.751 --rc geninfo_unexecuted_blocks=1 00:08:58.751 00:08:58.751 ' 00:08:58.751 
06:33:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:58.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.751 --rc genhtml_branch_coverage=1 00:08:58.751 --rc genhtml_function_coverage=1 00:08:58.751 --rc genhtml_legend=1 00:08:58.751 --rc geninfo_all_blocks=1 00:08:58.751 --rc geninfo_unexecuted_blocks=1 00:08:58.751 00:08:58.751 ' 00:08:58.751 06:33:54 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.751 06:33:54 -- nvmf/common.sh@7 -- # uname -s 00:08:58.751 06:33:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.751 06:33:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.751 06:33:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.751 06:33:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.751 06:33:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.751 06:33:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.751 06:33:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.751 06:33:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.751 06:33:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.751 06:33:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.752 06:33:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:58.752 06:33:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:08:58.752 06:33:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.752 06:33:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.752 06:33:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.752 06:33:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.752 06:33:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.752 06:33:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.752 06:33:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.752 06:33:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.752 06:33:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.752 06:33:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.752 06:33:54 -- paths/export.sh@5 -- # export PATH 00:08:58.752 06:33:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.752 06:33:54 -- nvmf/common.sh@46 -- # : 0 00:08:58.752 06:33:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:58.752 06:33:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:58.752 06:33:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:58.752 06:33:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.752 06:33:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.752 06:33:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:58.752 06:33:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:58.752 06:33:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:58.752 06:33:54 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.752 06:33:54 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.752 06:33:54 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:58.752 06:33:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:58.752 06:33:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.752 06:33:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:58.752 06:33:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:58.752 06:33:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:58.752 06:33:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.752 06:33:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.752 06:33:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.752 06:33:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:58.752 06:33:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:58.752 06:33:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:58.752 06:33:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:58.752 06:33:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:58.752 06:33:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:58.752 06:33:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.752 06:33:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.752 06:33:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.752 06:33:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:58.752 06:33:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.752 06:33:54 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.752 06:33:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.752 06:33:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.752 06:33:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.752 06:33:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.752 06:33:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.752 06:33:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.752 06:33:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:58.752 06:33:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:58.752 Cannot find device "nvmf_tgt_br" 00:08:58.752 06:33:54 -- nvmf/common.sh@154 -- # true 00:08:58.752 06:33:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.752 Cannot find device "nvmf_tgt_br2" 00:08:58.752 06:33:54 -- nvmf/common.sh@155 -- # true 00:08:58.752 06:33:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:58.752 06:33:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:58.752 Cannot find device "nvmf_tgt_br" 00:08:58.752 06:33:54 -- nvmf/common.sh@157 -- # true 00:08:58.752 06:33:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:58.752 Cannot find device "nvmf_tgt_br2" 00:08:58.752 06:33:54 -- nvmf/common.sh@158 -- # true 00:08:58.752 06:33:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:58.752 06:33:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:58.752 06:33:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.752 06:33:54 -- nvmf/common.sh@161 -- # true 00:08:58.752 06:33:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.011 06:33:54 -- nvmf/common.sh@162 -- # true 00:08:59.011 06:33:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.011 06:33:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.011 06:33:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.011 06:33:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.011 06:33:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.011 06:33:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.011 06:33:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.011 06:33:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:59.011 06:33:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:59.011 06:33:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:59.011 06:33:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:59.011 06:33:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:59.011 06:33:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:59.011 06:33:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.011 06:33:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:59.011 06:33:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.011 06:33:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:59.011 06:33:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:59.011 06:33:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.011 06:33:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.012 06:33:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.012 06:33:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.012 06:33:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.012 06:33:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:59.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:59.012 00:08:59.012 --- 10.0.0.2 ping statistics --- 00:08:59.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.012 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:59.012 06:33:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:59.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:59.012 00:08:59.012 --- 10.0.0.3 ping statistics --- 00:08:59.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.012 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:59.012 06:33:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:59.012 00:08:59.012 --- 10.0.0.1 ping statistics --- 00:08:59.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.012 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:59.012 06:33:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.012 06:33:54 -- nvmf/common.sh@421 -- # return 0 00:08:59.012 06:33:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.012 06:33:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.012 06:33:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:59.012 06:33:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:59.012 06:33:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.012 06:33:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:59.012 06:33:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:59.012 06:33:54 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:59.012 06:33:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.012 06:33:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.012 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:08:59.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:59.012 06:33:54 -- nvmf/common.sh@469 -- # nvmfpid=72431 00:08:59.012 06:33:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.012 06:33:54 -- nvmf/common.sh@470 -- # waitforlisten 72431 00:08:59.012 06:33:54 -- common/autotest_common.sh@829 -- # '[' -z 72431 ']' 00:08:59.012 06:33:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.012 06:33:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.012 06:33:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.012 06:33:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.012 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:08:59.012 [2024-12-05 06:33:54.463976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:59.012 [2024-12-05 06:33:54.464090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.271 [2024-12-05 06:33:54.604847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.271 [2024-12-05 06:33:54.639328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.271 [2024-12-05 06:33:54.639508] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.271 [2024-12-05 06:33:54.639522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.271 [2024-12-05 06:33:54.639545] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
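nvmfappstart launches nvmf_tgt inside the target namespace, records its pid (72431 in this run), and waitforlisten then polls the RPC Unix socket until the application answers. A minimal sketch of that pattern follows; the poll loop body is an assumption for illustration, the real helper lives in test/common/autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll until the RPC server responds (sketch of waitforlisten).
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done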
00:08:59.271 [2024-12-05 06:33:54.639579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.217 06:33:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.217 06:33:55 -- common/autotest_common.sh@862 -- # return 0 00:09:00.217 06:33:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:00.217 06:33:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.217 06:33:55 -- common/autotest_common.sh@10 -- # set +x 00:09:00.217 06:33:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.217 06:33:55 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.476 [2024-12-05 06:33:55.733070] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:00.476 06:33:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.476 06:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.476 06:33:55 -- common/autotest_common.sh@10 -- # set +x 00:09:00.476 ************************************ 00:09:00.476 START TEST lvs_grow_clean 00:09:00.476 ************************************ 00:09:00.476 06:33:55 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.476 06:33:55 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.735 06:33:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:00.735 06:33:56 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:00.995 06:33:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:00.995 06:33:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:00.995 06:33:56 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:01.254 06:33:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:01.254 06:33:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:01.254 06:33:56 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u da8834bf-f79a-41ee-88a8-4745b90d03c1 lvol 150 00:09:01.512 06:33:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5b870da5-8e3b-4800-aa0b-f911f0236009 00:09:01.512 06:33:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.512 06:33:56 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:01.770 [2024-12-05 06:33:56.999363] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:01.770 [2024-12-05 06:33:56.999748] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:01.770 true 00:09:01.770 06:33:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:01.770 06:33:57 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:02.029 06:33:57 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:02.029 06:33:57 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:02.288 06:33:57 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b870da5-8e3b-4800-aa0b-f911f0236009 00:09:02.546 06:33:57 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:02.806 [2024-12-05 06:33:58.020085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.806 06:33:58 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.066 06:33:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72513 00:09:03.066 06:33:58 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.066 06:33:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.066 06:33:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72513 /var/tmp/bdevperf.sock 00:09:03.066 06:33:58 -- common/autotest_common.sh@829 -- # '[' -z 72513 ']' 00:09:03.066 06:33:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.066 06:33:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.066 06:33:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.066 06:33:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.066 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:09:03.066 [2024-12-05 06:33:58.317805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:03.066 [2024-12-05 06:33:58.318123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72513 ] 00:09:03.066 [2024-12-05 06:33:58.453171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.066 [2024-12-05 06:33:58.484715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.004 06:33:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.004 06:33:59 -- common/autotest_common.sh@862 -- # return 0 00:09:04.004 06:33:59 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:04.262 Nvme0n1 00:09:04.263 06:33:59 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:04.521 [ 00:09:04.521 { 00:09:04.521 "name": "Nvme0n1", 00:09:04.521 "aliases": [ 00:09:04.521 "5b870da5-8e3b-4800-aa0b-f911f0236009" 00:09:04.521 ], 00:09:04.521 "product_name": "NVMe disk", 00:09:04.521 "block_size": 4096, 00:09:04.521 "num_blocks": 38912, 00:09:04.521 "uuid": "5b870da5-8e3b-4800-aa0b-f911f0236009", 00:09:04.521 "assigned_rate_limits": { 00:09:04.521 "rw_ios_per_sec": 0, 00:09:04.521 "rw_mbytes_per_sec": 0, 00:09:04.521 "r_mbytes_per_sec": 0, 00:09:04.521 "w_mbytes_per_sec": 0 00:09:04.521 }, 00:09:04.521 "claimed": false, 00:09:04.521 "zoned": false, 00:09:04.521 "supported_io_types": { 00:09:04.521 "read": true, 00:09:04.521 "write": true, 00:09:04.521 "unmap": true, 00:09:04.521 "write_zeroes": true, 00:09:04.521 "flush": true, 00:09:04.521 "reset": true, 00:09:04.521 "compare": true, 00:09:04.521 "compare_and_write": true, 00:09:04.521 "abort": true, 00:09:04.521 "nvme_admin": true, 00:09:04.521 "nvme_io": true 00:09:04.521 }, 00:09:04.521 "driver_specific": { 00:09:04.521 "nvme": [ 00:09:04.521 { 00:09:04.521 "trid": { 00:09:04.521 "trtype": "TCP", 00:09:04.521 "adrfam": "IPv4", 00:09:04.521 "traddr": "10.0.0.2", 00:09:04.521 "trsvcid": "4420", 00:09:04.521 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:04.521 }, 00:09:04.521 "ctrlr_data": { 00:09:04.521 "cntlid": 1, 00:09:04.521 "vendor_id": "0x8086", 00:09:04.521 "model_number": "SPDK bdev Controller", 00:09:04.521 "serial_number": "SPDK0", 00:09:04.521 "firmware_revision": "24.01.1", 00:09:04.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.521 "oacs": { 00:09:04.521 "security": 0, 00:09:04.521 "format": 0, 00:09:04.521 "firmware": 0, 00:09:04.521 "ns_manage": 0 00:09:04.521 }, 00:09:04.521 "multi_ctrlr": true, 00:09:04.521 "ana_reporting": false 00:09:04.521 }, 00:09:04.521 "vs": { 00:09:04.521 "nvme_version": "1.3" 00:09:04.521 }, 00:09:04.521 "ns_data": { 00:09:04.521 "id": 1, 00:09:04.521 "can_share": true 00:09:04.521 } 00:09:04.521 } 00:09:04.521 ], 00:09:04.521 "mp_policy": "active_passive" 00:09:04.521 } 00:09:04.521 } 00:09:04.521 ] 00:09:04.521 06:33:59 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72537 00:09:04.521 06:33:59 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.521 06:33:59 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:04.780 Running I/O for 10 seconds... 
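Every size in the Nvme0n1 JSON above follows from the lvstore geometry: the 200 MiB AIO file holds 50 clusters of 4 MiB, of which one is reserved (presumably for lvstore metadata), leaving the 49 data clusters asserted earlier; the 150 MiB lvol is rounded up to a whole number of clusters, ceil(150/4) = 38, which is 38 x 4 MiB / 4 KiB = 38912 blocks, exactly the num_blocks exported over NVMe-oF; and once the store is grown to 99 clusters, 99 - 38 = 61 remain free, the value the later free_clusters checks expect. The same numbers as shell arithmetic:

    cluster=$((4 * 1024 * 1024))                            # 4 MiB cluster size
    echo $((200 * 1024 * 1024 / cluster))                   # 50 clusters in the 200 MiB file
    echo $(((150 * 1024 * 1024 + cluster - 1) / cluster))   # lvol rounds up to 38 clusters
    echo $((38 * cluster / 4096))                           # 38912 blocks of 4 KiB
    echo $((99 - 38))                                       # 61 free clusters after the grow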
00:09:05.716 Latency(us) 00:09:05.716 [2024-12-05T06:34:01.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.716 [2024-12-05T06:34:01.182Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.716 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:05.716 [2024-12-05T06:34:01.182Z] =================================================================================================================== 00:09:05.716 [2024-12-05T06:34:01.182Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:05.716 00:09:06.654 06:34:01 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:06.654 [2024-12-05T06:34:02.120Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.654 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:06.654 [2024-12-05T06:34:02.120Z] =================================================================================================================== 00:09:06.654 [2024-12-05T06:34:02.120Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:06.654 00:09:06.913 true 00:09:06.913 06:34:02 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:06.913 06:34:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:07.172 06:34:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:07.172 06:34:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:07.172 06:34:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 72537 00:09:07.741 [2024-12-05T06:34:03.207Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.741 Nvme0n1 : 3.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:07.741 [2024-12-05T06:34:03.207Z] =================================================================================================================== 00:09:07.741 [2024-12-05T06:34:03.207Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:07.741 00:09:08.678 [2024-12-05T06:34:04.144Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.678 Nvme0n1 : 4.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:08.678 [2024-12-05T06:34:04.144Z] =================================================================================================================== 00:09:08.678 [2024-12-05T06:34:04.144Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:08.678 00:09:09.616 [2024-12-05T06:34:05.082Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.616 Nvme0n1 : 5.00 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:09:09.616 [2024-12-05T06:34:05.082Z] =================================================================================================================== 00:09:09.616 [2024-12-05T06:34:05.082Z] Total : 6756.40 26.39 0.00 0.00 0.00 0.00 0.00 00:09:09.616 00:09:10.552 [2024-12-05T06:34:06.018Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.552 Nvme0n1 : 6.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:10.552 [2024-12-05T06:34:06.018Z] =================================================================================================================== 00:09:10.552 [2024-12-05T06:34:06.018Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:10.552 00:09:11.556 [2024-12-05T06:34:07.022Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:11.556 Nvme0n1 : 7.00 6712.86 26.22 0.00 0.00 0.00 0.00 0.00 00:09:11.556 [2024-12-05T06:34:07.022Z] =================================================================================================================== 00:09:11.556 [2024-12-05T06:34:07.022Z] Total : 6712.86 26.22 0.00 0.00 0.00 0.00 0.00 00:09:11.556 00:09:12.930 [2024-12-05T06:34:08.396Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.930 Nvme0n1 : 8.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:12.930 [2024-12-05T06:34:08.396Z] =================================================================================================================== 00:09:12.930 [2024-12-05T06:34:08.396Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:12.930 00:09:13.867 [2024-12-05T06:34:09.333Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.867 Nvme0n1 : 9.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:13.867 [2024-12-05T06:34:09.333Z] =================================================================================================================== 00:09:13.867 [2024-12-05T06:34:09.333Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:13.867 00:09:14.805 [2024-12-05T06:34:10.271Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.805 Nvme0n1 : 10.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:14.805 [2024-12-05T06:34:10.271Z] =================================================================================================================== 00:09:14.805 [2024-12-05T06:34:10.271Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:14.805 00:09:14.805 00:09:14.805 Latency(us) 00:09:14.805 [2024-12-05T06:34:10.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.805 [2024-12-05T06:34:10.271Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.805 Nvme0n1 : 10.00 6677.59 26.08 0.00 0.00 19163.35 16681.89 40751.48 00:09:14.805 [2024-12-05T06:34:10.271Z] =================================================================================================================== 00:09:14.805 [2024-12-05T06:34:10.271Z] Total : 6677.59 26.08 0.00 0.00 19163.35 16681.89 40751.48 00:09:14.805 0 00:09:14.805 06:34:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72513 00:09:14.805 06:34:10 -- common/autotest_common.sh@936 -- # '[' -z 72513 ']' 00:09:14.805 06:34:10 -- common/autotest_common.sh@940 -- # kill -0 72513 00:09:14.805 06:34:10 -- common/autotest_common.sh@941 -- # uname 00:09:14.805 06:34:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.805 06:34:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72513 00:09:14.805 killing process with pid 72513 00:09:14.805 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.805 00:09:14.805 Latency(us) 00:09:14.805 [2024-12-05T06:34:10.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.805 [2024-12-05T06:34:10.271Z] =================================================================================================================== 00:09:14.805 [2024-12-05T06:34:10.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.805 06:34:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:14.805 06:34:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:14.805 06:34:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72513' 00:09:14.805 06:34:10 -- common/autotest_common.sh@955 
-- # kill 72513 00:09:14.805 06:34:10 -- common/autotest_common.sh@960 -- # wait 72513 00:09:14.805 06:34:10 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.063 06:34:10 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:15.063 06:34:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:15.629 06:34:10 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:15.629 06:34:10 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:15.629 06:34:10 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:15.629 [2024-12-05 06:34:11.060268] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:15.888 06:34:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:15.888 06:34:11 -- common/autotest_common.sh@650 -- # local es=0 00:09:15.888 06:34:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:15.888 06:34:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.888 06:34:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.888 06:34:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.888 06:34:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.888 06:34:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.888 06:34:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.888 06:34:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.888 06:34:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:15.888 06:34:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:16.148 request: 00:09:16.148 { 00:09:16.148 "uuid": "da8834bf-f79a-41ee-88a8-4745b90d03c1", 00:09:16.148 "method": "bdev_lvol_get_lvstores", 00:09:16.148 "req_id": 1 00:09:16.148 } 00:09:16.148 Got JSON-RPC error response 00:09:16.148 response: 00:09:16.148 { 00:09:16.148 "code": -19, 00:09:16.148 "message": "No such device" 00:09:16.148 } 00:09:16.148 06:34:11 -- common/autotest_common.sh@653 -- # es=1 00:09:16.148 06:34:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.148 06:34:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.148 06:34:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.148 06:34:11 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.407 aio_bdev 00:09:16.407 06:34:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5b870da5-8e3b-4800-aa0b-f911f0236009 00:09:16.407 06:34:11 -- common/autotest_common.sh@897 -- # local bdev_name=5b870da5-8e3b-4800-aa0b-f911f0236009 00:09:16.407 06:34:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:16.407 06:34:11 -- common/autotest_common.sh@899 -- # local i 00:09:16.407 06:34:11 -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:16.407 06:34:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:16.407 06:34:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.666 06:34:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b870da5-8e3b-4800-aa0b-f911f0236009 -t 2000 00:09:16.925 [ 00:09:16.925 { 00:09:16.925 "name": "5b870da5-8e3b-4800-aa0b-f911f0236009", 00:09:16.925 "aliases": [ 00:09:16.925 "lvs/lvol" 00:09:16.925 ], 00:09:16.925 "product_name": "Logical Volume", 00:09:16.925 "block_size": 4096, 00:09:16.925 "num_blocks": 38912, 00:09:16.925 "uuid": "5b870da5-8e3b-4800-aa0b-f911f0236009", 00:09:16.925 "assigned_rate_limits": { 00:09:16.925 "rw_ios_per_sec": 0, 00:09:16.925 "rw_mbytes_per_sec": 0, 00:09:16.925 "r_mbytes_per_sec": 0, 00:09:16.925 "w_mbytes_per_sec": 0 00:09:16.925 }, 00:09:16.925 "claimed": false, 00:09:16.925 "zoned": false, 00:09:16.925 "supported_io_types": { 00:09:16.925 "read": true, 00:09:16.925 "write": true, 00:09:16.925 "unmap": true, 00:09:16.925 "write_zeroes": true, 00:09:16.925 "flush": false, 00:09:16.925 "reset": true, 00:09:16.926 "compare": false, 00:09:16.926 "compare_and_write": false, 00:09:16.926 "abort": false, 00:09:16.926 "nvme_admin": false, 00:09:16.926 "nvme_io": false 00:09:16.926 }, 00:09:16.926 "driver_specific": { 00:09:16.926 "lvol": { 00:09:16.926 "lvol_store_uuid": "da8834bf-f79a-41ee-88a8-4745b90d03c1", 00:09:16.926 "base_bdev": "aio_bdev", 00:09:16.926 "thin_provision": false, 00:09:16.926 "snapshot": false, 00:09:16.926 "clone": false, 00:09:16.926 "esnap_clone": false 00:09:16.926 } 00:09:16.926 } 00:09:16.926 } 00:09:16.926 ] 00:09:16.926 06:34:12 -- common/autotest_common.sh@905 -- # return 0 00:09:16.926 06:34:12 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:16.926 06:34:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:17.184 06:34:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:17.184 06:34:12 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:17.185 06:34:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:17.443 06:34:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:17.443 06:34:12 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5b870da5-8e3b-4800-aa0b-f911f0236009 00:09:17.701 06:34:13 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da8834bf-f79a-41ee-88a8-4745b90d03c1 00:09:17.959 06:34:13 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:18.216 06:34:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.781 00:09:18.781 real 0m18.286s 00:09:18.781 user 0m17.509s 00:09:18.782 sys 0m2.386s 00:09:18.782 06:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.782 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.782 ************************************ 00:09:18.782 END TEST lvs_grow_clean 00:09:18.782 ************************************ 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:18.782 06:34:14 
-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:18.782 06:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.782 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:09:18.782 ************************************ 00:09:18.782 START TEST lvs_grow_dirty 00:09:18.782 ************************************ 00:09:18.782 06:34:14 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.782 06:34:14 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:19.040 06:34:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:19.040 06:34:14 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:19.299 06:34:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:19.299 06:34:14 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:19.299 06:34:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1102046a-c58e-4def-97b1-c55ddb6d02df lvol 150 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.867 06:34:15 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:20.126 [2024-12-05 06:34:15.549746] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:20.126 [2024-12-05 06:34:15.549836] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:20.126 true 00:09:20.126 06:34:15 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:20.126 06:34:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:20.693 06:34:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:20.693 06:34:15 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.951 06:34:16 -- 
target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:21.210 06:34:16 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.470 06:34:16 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.729 06:34:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72788 00:09:21.729 06:34:16 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.729 06:34:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.729 06:34:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72788 /var/tmp/bdevperf.sock 00:09:21.729 06:34:16 -- common/autotest_common.sh@829 -- # '[' -z 72788 ']' 00:09:21.729 06:34:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.729 06:34:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.729 06:34:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.729 06:34:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.729 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:09:21.729 [2024-12-05 06:34:17.043620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:21.729 [2024-12-05 06:34:17.043871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72788 ] 00:09:21.729 [2024-12-05 06:34:17.182807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.986 [2024-12-05 06:34:17.226829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.918 06:34:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.918 06:34:18 -- common/autotest_common.sh@862 -- # return 0 00:09:22.918 06:34:18 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.176 Nvme0n1 00:09:23.176 06:34:18 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.434 [ 00:09:23.434 { 00:09:23.434 "name": "Nvme0n1", 00:09:23.434 "aliases": [ 00:09:23.434 "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb" 00:09:23.434 ], 00:09:23.434 "product_name": "NVMe disk", 00:09:23.434 "block_size": 4096, 00:09:23.434 "num_blocks": 38912, 00:09:23.434 "uuid": "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb", 00:09:23.434 "assigned_rate_limits": { 00:09:23.434 "rw_ios_per_sec": 0, 00:09:23.434 "rw_mbytes_per_sec": 0, 00:09:23.434 "r_mbytes_per_sec": 0, 00:09:23.434 "w_mbytes_per_sec": 0 00:09:23.434 }, 00:09:23.434 "claimed": false, 00:09:23.434 "zoned": false, 00:09:23.434 "supported_io_types": { 00:09:23.434 "read": true, 00:09:23.434 "write": true, 00:09:23.434 "unmap": true, 00:09:23.434 "write_zeroes": true, 00:09:23.434 "flush": true, 00:09:23.434 "reset": true, 00:09:23.434 "compare": true, 00:09:23.434 "compare_and_write": true, 00:09:23.434 "abort": true, 00:09:23.434 "nvme_admin": true, 00:09:23.434 "nvme_io": true 00:09:23.434 }, 00:09:23.434 "driver_specific": { 00:09:23.434 "nvme": [ 00:09:23.434 { 00:09:23.434 "trid": { 00:09:23.434 "trtype": "TCP", 00:09:23.434 "adrfam": "IPv4", 00:09:23.434 "traddr": "10.0.0.2", 00:09:23.434 "trsvcid": "4420", 00:09:23.434 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.434 }, 00:09:23.434 "ctrlr_data": { 00:09:23.434 "cntlid": 1, 00:09:23.434 "vendor_id": "0x8086", 00:09:23.434 "model_number": "SPDK bdev Controller", 00:09:23.434 "serial_number": "SPDK0", 00:09:23.434 "firmware_revision": "24.01.1", 00:09:23.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.434 "oacs": { 00:09:23.434 "security": 0, 00:09:23.434 "format": 0, 00:09:23.434 "firmware": 0, 00:09:23.434 "ns_manage": 0 00:09:23.434 }, 00:09:23.434 "multi_ctrlr": true, 00:09:23.434 "ana_reporting": false 00:09:23.434 }, 00:09:23.434 "vs": { 00:09:23.434 "nvme_version": "1.3" 00:09:23.434 }, 00:09:23.434 "ns_data": { 00:09:23.434 "id": 1, 00:09:23.434 "can_share": true 00:09:23.434 } 00:09:23.434 } 00:09:23.434 ], 00:09:23.434 "mp_policy": "active_passive" 00:09:23.434 } 00:09:23.434 } 00:09:23.434 ] 00:09:23.434 06:34:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72817 00:09:23.434 06:34:18 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.434 06:34:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.692 Running I/O for 10 seconds... 
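As in the clean variant, the interesting step lands while this 10-second randwrite job is in flight: just below, the harness grows the lvstore online with bdev_lvol_grow_lvstore (the backing file was already truncated to 400M and rescanned) and re-reads total_data_clusters to confirm it doubled from 49 to 99, all without pausing I/O. Condensed, with the UUID from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_lvol_grow_lvstore -u 1102046a-c58e-4def-97b1-c55ddb6d02df
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df \
        | jq -r '.[0].total_data_clusters'   # expect 99 after the grow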
00:09:24.634 Latency(us) 00:09:24.634 [2024-12-05T06:34:20.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.634 [2024-12-05T06:34:20.100Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.634 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:24.634 [2024-12-05T06:34:20.100Z] =================================================================================================================== 00:09:24.634 [2024-12-05T06:34:20.100Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:24.634 00:09:25.570 06:34:20 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:25.570 [2024-12-05T06:34:21.036Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.570 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:25.570 [2024-12-05T06:34:21.036Z] =================================================================================================================== 00:09:25.570 [2024-12-05T06:34:21.036Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:25.570 00:09:25.828 true 00:09:25.828 06:34:21 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:25.828 06:34:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.394 06:34:21 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.394 06:34:21 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.394 06:34:21 -- target/nvmf_lvs_grow.sh@65 -- # wait 72817 00:09:26.651 [2024-12-05T06:34:22.117Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.651 Nvme0n1 : 3.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:26.651 [2024-12-05T06:34:22.117Z] =================================================================================================================== 00:09:26.651 [2024-12-05T06:34:22.117Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:26.651 00:09:27.604 [2024-12-05T06:34:23.070Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.604 Nvme0n1 : 4.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:27.604 [2024-12-05T06:34:23.070Z] =================================================================================================================== 00:09:27.604 [2024-12-05T06:34:23.070Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:27.604 00:09:28.542 [2024-12-05T06:34:24.009Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.543 Nvme0n1 : 5.00 6499.20 25.39 0.00 0.00 0.00 0.00 0.00 00:09:28.543 [2024-12-05T06:34:24.009Z] =================================================================================================================== 00:09:28.543 [2024-12-05T06:34:24.009Z] Total : 6499.20 25.39 0.00 0.00 0.00 0.00 0.00 00:09:28.543 00:09:29.478 [2024-12-05T06:34:24.944Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.478 Nvme0n1 : 6.00 6453.17 25.21 0.00 0.00 0.00 0.00 0.00 00:09:29.478 [2024-12-05T06:34:24.944Z] =================================================================================================================== 00:09:29.478 [2024-12-05T06:34:24.944Z] Total : 6453.17 25.21 0.00 0.00 0.00 0.00 0.00 00:09:29.478 00:09:30.859 [2024-12-05T06:34:26.325Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:30.859 Nvme0n1 : 7.00 6423.86 25.09 0.00 0.00 0.00 0.00 0.00 00:09:30.859 [2024-12-05T06:34:26.325Z] =================================================================================================================== 00:09:30.859 [2024-12-05T06:34:26.325Z] Total : 6423.86 25.09 0.00 0.00 0.00 0.00 0.00 00:09:30.859 00:09:31.797 [2024-12-05T06:34:27.263Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.797 Nvme0n1 : 8.00 6430.50 25.12 0.00 0.00 0.00 0.00 0.00 00:09:31.797 [2024-12-05T06:34:27.263Z] =================================================================================================================== 00:09:31.797 [2024-12-05T06:34:27.263Z] Total : 6430.50 25.12 0.00 0.00 0.00 0.00 0.00 00:09:31.797 00:09:32.735 [2024-12-05T06:34:28.201Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.735 Nvme0n1 : 9.00 6407.44 25.03 0.00 0.00 0.00 0.00 0.00 00:09:32.735 [2024-12-05T06:34:28.201Z] =================================================================================================================== 00:09:32.735 [2024-12-05T06:34:28.201Z] Total : 6407.44 25.03 0.00 0.00 0.00 0.00 0.00 00:09:32.735 00:09:33.672 [2024-12-05T06:34:29.138Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.672 Nvme0n1 : 10.00 6414.40 25.06 0.00 0.00 0.00 0.00 0.00 00:09:33.672 [2024-12-05T06:34:29.138Z] =================================================================================================================== 00:09:33.672 [2024-12-05T06:34:29.138Z] Total : 6414.40 25.06 0.00 0.00 0.00 0.00 0.00 00:09:33.672 00:09:33.672 00:09:33.672 Latency(us) 00:09:33.672 [2024-12-05T06:34:29.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.673 [2024-12-05T06:34:29.139Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.673 Nvme0n1 : 10.01 6421.11 25.08 0.00 0.00 19927.42 7060.01 81502.95 00:09:33.673 [2024-12-05T06:34:29.139Z] =================================================================================================================== 00:09:33.673 [2024-12-05T06:34:29.139Z] Total : 6421.11 25.08 0.00 0.00 19927.42 7060.01 81502.95 00:09:33.673 0 00:09:33.673 06:34:28 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72788 00:09:33.673 06:34:28 -- common/autotest_common.sh@936 -- # '[' -z 72788 ']' 00:09:33.673 06:34:28 -- common/autotest_common.sh@940 -- # kill -0 72788 00:09:33.673 06:34:28 -- common/autotest_common.sh@941 -- # uname 00:09:33.673 06:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:33.673 06:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72788 00:09:33.673 killing process with pid 72788 00:09:33.673 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.673 00:09:33.673 Latency(us) 00:09:33.673 [2024-12-05T06:34:29.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.673 [2024-12-05T06:34:29.139Z] =================================================================================================================== 00:09:33.673 [2024-12-05T06:34:29.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.673 06:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:33.673 06:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:33.673 06:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72788' 00:09:33.673 06:34:28 -- common/autotest_common.sh@955 
-- # kill 72788 00:09:33.673 06:34:28 -- common/autotest_common.sh@960 -- # wait 72788 00:09:33.933 06:34:29 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.933 06:34:29 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:33.933 06:34:29 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:34.192 06:34:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:34.192 06:34:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.192 06:34:29 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72431 00:09:34.192 06:34:29 -- target/nvmf_lvs_grow.sh@74 -- # wait 72431 00:09:34.452 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72431 Killed "${NVMF_APP[@]}" "$@" 00:09:34.452 06:34:29 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:34.452 06:34:29 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:34.452 06:34:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:34.452 06:34:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.452 06:34:29 -- common/autotest_common.sh@10 -- # set +x 00:09:34.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.452 06:34:29 -- nvmf/common.sh@469 -- # nvmfpid=72943 00:09:34.452 06:34:29 -- nvmf/common.sh@470 -- # waitforlisten 72943 00:09:34.452 06:34:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.452 06:34:29 -- common/autotest_common.sh@829 -- # '[' -z 72943 ']' 00:09:34.452 06:34:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.452 06:34:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.452 06:34:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.452 06:34:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.452 06:34:29 -- common/autotest_common.sh@10 -- # set +x 00:09:34.452 [2024-12-05 06:34:29.747846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:34.452 [2024-12-05 06:34:29.748164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.452 [2024-12-05 06:34:29.887758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.763 [2024-12-05 06:34:29.918932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.763 [2024-12-05 06:34:29.919120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.763 [2024-12-05 06:34:29.919133] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.763 [2024-12-05 06:34:29.919141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
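This is the step that makes the test "dirty": rather than shutting the target down cleanly, the harness SIGKILLs pid 72431 while the grown lvstore metadata is still unflushed, then starts a fresh nvmf_tgt (pid 72943). When the AIO bdev is re-created below, the blobstore detects the unclean shutdown and replays its metadata, which is what the "Performing recovery on blobstore" and "Recover: blob" notices report. A sketch of the sequence, with paths as in the trace:

    kill -9 "$nvmfpid"; wait "$nvmfpid" || true   # leaves the lvstore dirty on disk
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # Re-attaching the backing file is what triggers blobstore recovery:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096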
00:09:34.763 [2024-12-05 06:34:29.919169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.333 06:34:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.333 06:34:30 -- common/autotest_common.sh@862 -- # return 0 00:09:35.333 06:34:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:35.333 06:34:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.333 06:34:30 -- common/autotest_common.sh@10 -- # set +x 00:09:35.333 06:34:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.333 06:34:30 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.591 [2024-12-05 06:34:31.037014] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.591 [2024-12-05 06:34:31.037420] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.591 [2024-12-05 06:34:31.037728] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.850 06:34:31 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:35.850 06:34:31 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:35.850 06:34:31 -- common/autotest_common.sh@897 -- # local bdev_name=b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:35.850 06:34:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:35.850 06:34:31 -- common/autotest_common.sh@899 -- # local i 00:09:35.850 06:34:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:35.850 06:34:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:35.850 06:34:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.108 06:34:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb -t 2000 00:09:36.375 [ 00:09:36.375 { 00:09:36.375 "name": "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb", 00:09:36.375 "aliases": [ 00:09:36.375 "lvs/lvol" 00:09:36.375 ], 00:09:36.375 "product_name": "Logical Volume", 00:09:36.375 "block_size": 4096, 00:09:36.375 "num_blocks": 38912, 00:09:36.375 "uuid": "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb", 00:09:36.375 "assigned_rate_limits": { 00:09:36.375 "rw_ios_per_sec": 0, 00:09:36.375 "rw_mbytes_per_sec": 0, 00:09:36.375 "r_mbytes_per_sec": 0, 00:09:36.375 "w_mbytes_per_sec": 0 00:09:36.375 }, 00:09:36.375 "claimed": false, 00:09:36.375 "zoned": false, 00:09:36.375 "supported_io_types": { 00:09:36.375 "read": true, 00:09:36.375 "write": true, 00:09:36.375 "unmap": true, 00:09:36.375 "write_zeroes": true, 00:09:36.375 "flush": false, 00:09:36.375 "reset": true, 00:09:36.375 "compare": false, 00:09:36.375 "compare_and_write": false, 00:09:36.375 "abort": false, 00:09:36.375 "nvme_admin": false, 00:09:36.375 "nvme_io": false 00:09:36.375 }, 00:09:36.375 "driver_specific": { 00:09:36.375 "lvol": { 00:09:36.375 "lvol_store_uuid": "1102046a-c58e-4def-97b1-c55ddb6d02df", 00:09:36.375 "base_bdev": "aio_bdev", 00:09:36.375 "thin_provision": false, 00:09:36.375 "snapshot": false, 00:09:36.375 "clone": false, 00:09:36.375 "esnap_clone": false 00:09:36.375 } 00:09:36.375 } 00:09:36.375 } 00:09:36.375 ] 00:09:36.375 06:34:31 -- common/autotest_common.sh@905 -- # return 0 00:09:36.375 06:34:31 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:36.375 06:34:31 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:36.651 06:34:31 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:36.651 06:34:31 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:36.651 06:34:31 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:36.910 06:34:32 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:36.910 06:34:32 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.910 [2024-12-05 06:34:32.330764] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.910 06:34:32 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:36.910 06:34:32 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.910 06:34:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:36.910 06:34:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.910 06:34:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.910 06:34:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.910 06:34:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.910 06:34:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.910 06:34:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.910 06:34:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.910 06:34:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:36.910 06:34:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:37.478 request: 00:09:37.478 { 00:09:37.478 "uuid": "1102046a-c58e-4def-97b1-c55ddb6d02df", 00:09:37.478 "method": "bdev_lvol_get_lvstores", 00:09:37.478 "req_id": 1 00:09:37.478 } 00:09:37.478 Got JSON-RPC error response 00:09:37.478 response: 00:09:37.478 { 00:09:37.478 "code": -19, 00:09:37.478 "message": "No such device" 00:09:37.478 } 00:09:37.478 06:34:32 -- common/autotest_common.sh@653 -- # es=1 00:09:37.478 06:34:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.478 06:34:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.478 06:34:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.478 06:34:32 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.478 aio_bdev 00:09:37.478 06:34:32 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:37.478 06:34:32 -- common/autotest_common.sh@897 -- # local bdev_name=b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:37.478 06:34:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:37.478 06:34:32 -- common/autotest_common.sh@899 -- # local i 00:09:37.478 06:34:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:37.478 06:34:32 -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:09:37.478 06:34:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.737 06:34:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb -t 2000 00:09:37.996 [ 00:09:37.996 { 00:09:37.996 "name": "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb", 00:09:37.996 "aliases": [ 00:09:37.996 "lvs/lvol" 00:09:37.996 ], 00:09:37.996 "product_name": "Logical Volume", 00:09:37.996 "block_size": 4096, 00:09:37.996 "num_blocks": 38912, 00:09:37.996 "uuid": "b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb", 00:09:37.996 "assigned_rate_limits": { 00:09:37.996 "rw_ios_per_sec": 0, 00:09:37.996 "rw_mbytes_per_sec": 0, 00:09:37.996 "r_mbytes_per_sec": 0, 00:09:37.996 "w_mbytes_per_sec": 0 00:09:37.996 }, 00:09:37.996 "claimed": false, 00:09:37.996 "zoned": false, 00:09:37.996 "supported_io_types": { 00:09:37.996 "read": true, 00:09:37.996 "write": true, 00:09:37.996 "unmap": true, 00:09:37.996 "write_zeroes": true, 00:09:37.996 "flush": false, 00:09:37.996 "reset": true, 00:09:37.996 "compare": false, 00:09:37.996 "compare_and_write": false, 00:09:37.996 "abort": false, 00:09:37.997 "nvme_admin": false, 00:09:37.997 "nvme_io": false 00:09:37.997 }, 00:09:37.997 "driver_specific": { 00:09:37.997 "lvol": { 00:09:37.997 "lvol_store_uuid": "1102046a-c58e-4def-97b1-c55ddb6d02df", 00:09:37.997 "base_bdev": "aio_bdev", 00:09:37.997 "thin_provision": false, 00:09:37.997 "snapshot": false, 00:09:37.997 "clone": false, 00:09:37.997 "esnap_clone": false 00:09:37.997 } 00:09:37.997 } 00:09:37.997 } 00:09:37.997 ] 00:09:37.997 06:34:33 -- common/autotest_common.sh@905 -- # return 0 00:09:37.997 06:34:33 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:37.997 06:34:33 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:38.255 06:34:33 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:38.255 06:34:33 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:38.255 06:34:33 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:38.514 06:34:33 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:38.514 06:34:33 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b20d5eb9-dc64-4b77-8bf9-5f08060ae6eb 00:09:38.773 06:34:34 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1102046a-c58e-4def-97b1-c55ddb6d02df 00:09:39.032 06:34:34 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.291 06:34:34 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:39.550 00:09:39.550 real 0m20.831s 00:09:39.550 user 0m42.594s 00:09:39.550 sys 0m9.240s 00:09:39.550 ************************************ 00:09:39.550 END TEST lvs_grow_dirty 00:09:39.550 ************************************ 00:09:39.550 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.550 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 06:34:34 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:39.550 06:34:34 -- common/autotest_common.sh@806 -- # type=--id 00:09:39.550 06:34:34 -- common/autotest_common.sh@807 -- # id=0 00:09:39.550 
06:34:34 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:39.550 06:34:34 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:39.550 06:34:34 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:39.550 06:34:34 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:39.550 06:34:34 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:39.550 06:34:34 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:39.550 nvmf_trace.0 00:09:39.809 06:34:35 -- common/autotest_common.sh@821 -- # return 0 00:09:39.809 06:34:35 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:39.809 06:34:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:39.809 06:34:35 -- nvmf/common.sh@116 -- # sync 00:09:39.809 06:34:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:39.809 06:34:35 -- nvmf/common.sh@119 -- # set +e 00:09:39.809 06:34:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:39.809 06:34:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:39.809 rmmod nvme_tcp 00:09:39.809 rmmod nvme_fabrics 00:09:39.809 rmmod nvme_keyring 00:09:39.809 06:34:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:39.809 06:34:35 -- nvmf/common.sh@123 -- # set -e 00:09:39.809 06:34:35 -- nvmf/common.sh@124 -- # return 0 00:09:39.809 06:34:35 -- nvmf/common.sh@477 -- # '[' -n 72943 ']' 00:09:39.809 06:34:35 -- nvmf/common.sh@478 -- # killprocess 72943 00:09:39.809 06:34:35 -- common/autotest_common.sh@936 -- # '[' -z 72943 ']' 00:09:39.809 06:34:35 -- common/autotest_common.sh@940 -- # kill -0 72943 00:09:39.809 06:34:35 -- common/autotest_common.sh@941 -- # uname 00:09:39.809 06:34:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:39.809 06:34:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72943 00:09:39.809 killing process with pid 72943 00:09:39.809 06:34:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:39.809 06:34:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:39.809 06:34:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72943' 00:09:39.809 06:34:35 -- common/autotest_common.sh@955 -- # kill 72943 00:09:39.809 06:34:35 -- common/autotest_common.sh@960 -- # wait 72943 00:09:40.067 06:34:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:40.067 06:34:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:40.067 06:34:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:40.067 06:34:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.067 06:34:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:40.067 06:34:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.067 06:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.067 06:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.067 06:34:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:40.067 ************************************ 00:09:40.067 END TEST nvmf_lvs_grow 00:09:40.067 ************************************ 00:09:40.067 00:09:40.067 real 0m41.541s 00:09:40.067 user 1m6.514s 00:09:40.067 sys 0m12.241s 00:09:40.067 06:34:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.067 06:34:35 -- common/autotest_common.sh@10 -- # set +x 00:09:40.067 06:34:35 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.067 06:34:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:40.067 06:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:40.067 06:34:35 -- common/autotest_common.sh@10 -- # set +x 00:09:40.067 ************************************ 00:09:40.067 START TEST nvmf_bdev_io_wait 00:09:40.067 ************************************ 00:09:40.067 06:34:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.327 * Looking for test storage... 00:09:40.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.327 06:34:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:40.327 06:34:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:40.327 06:34:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:40.327 06:34:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:40.327 06:34:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:40.327 06:34:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:40.327 06:34:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:40.327 06:34:35 -- scripts/common.sh@335 -- # IFS=.-: 00:09:40.327 06:34:35 -- scripts/common.sh@335 -- # read -ra ver1 00:09:40.327 06:34:35 -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.327 06:34:35 -- scripts/common.sh@336 -- # read -ra ver2 00:09:40.327 06:34:35 -- scripts/common.sh@337 -- # local 'op=<' 00:09:40.327 06:34:35 -- scripts/common.sh@339 -- # ver1_l=2 00:09:40.327 06:34:35 -- scripts/common.sh@340 -- # ver2_l=1 00:09:40.327 06:34:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:40.327 06:34:35 -- scripts/common.sh@343 -- # case "$op" in 00:09:40.327 06:34:35 -- scripts/common.sh@344 -- # : 1 00:09:40.327 06:34:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:40.327 06:34:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.327 06:34:35 -- scripts/common.sh@364 -- # decimal 1 00:09:40.327 06:34:35 -- scripts/common.sh@352 -- # local d=1 00:09:40.327 06:34:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.327 06:34:35 -- scripts/common.sh@354 -- # echo 1 00:09:40.327 06:34:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:40.327 06:34:35 -- scripts/common.sh@365 -- # decimal 2 00:09:40.327 06:34:35 -- scripts/common.sh@352 -- # local d=2 00:09:40.327 06:34:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.327 06:34:35 -- scripts/common.sh@354 -- # echo 2 00:09:40.327 06:34:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:40.327 06:34:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:40.327 06:34:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:40.327 06:34:35 -- scripts/common.sh@367 -- # return 0 00:09:40.327 06:34:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.327 06:34:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 06:34:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 06:34:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 06:34:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:40.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.327 --rc genhtml_branch_coverage=1 00:09:40.327 --rc genhtml_function_coverage=1 00:09:40.327 --rc genhtml_legend=1 00:09:40.327 --rc geninfo_all_blocks=1 00:09:40.327 --rc geninfo_unexecuted_blocks=1 00:09:40.327 00:09:40.327 ' 00:09:40.327 06:34:35 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.327 06:34:35 -- nvmf/common.sh@7 -- # uname -s 00:09:40.327 06:34:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.327 06:34:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.327 06:34:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.327 06:34:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.327 06:34:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.327 06:34:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.327 06:34:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.327 06:34:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.327 06:34:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.327 06:34:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 
00:09:40.327 06:34:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:09:40.327 06:34:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.327 06:34:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.327 06:34:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.327 06:34:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.327 06:34:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.327 06:34:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.327 06:34:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.327 06:34:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 06:34:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 06:34:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 06:34:35 -- paths/export.sh@5 -- # export PATH 00:09:40.327 06:34:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.327 06:34:35 -- nvmf/common.sh@46 -- # : 0 00:09:40.327 06:34:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:40.327 06:34:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:40.327 06:34:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:40.327 06:34:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.327 06:34:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.327 06:34:35 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:40.327 06:34:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:40.327 06:34:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:40.327 06:34:35 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.327 06:34:35 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.327 06:34:35 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:40.327 06:34:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:40.327 06:34:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.327 06:34:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:40.327 06:34:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:40.327 06:34:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:40.327 06:34:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.327 06:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.327 06:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.327 06:34:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:40.327 06:34:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:40.328 06:34:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.328 06:34:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.328 06:34:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:40.328 06:34:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:40.328 06:34:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.328 06:34:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.328 06:34:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.328 06:34:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.328 06:34:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.328 06:34:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.328 06:34:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.328 06:34:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.328 06:34:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:40.328 06:34:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:40.328 Cannot find device "nvmf_tgt_br" 00:09:40.328 06:34:35 -- nvmf/common.sh@154 -- # true 00:09:40.328 06:34:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.328 Cannot find device "nvmf_tgt_br2" 00:09:40.328 06:34:35 -- nvmf/common.sh@155 -- # true 00:09:40.328 06:34:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:40.328 06:34:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:40.328 Cannot find device "nvmf_tgt_br" 00:09:40.328 06:34:35 -- nvmf/common.sh@157 -- # true 00:09:40.328 06:34:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:40.328 Cannot find device "nvmf_tgt_br2" 00:09:40.328 06:34:35 -- nvmf/common.sh@158 -- # true 00:09:40.328 06:34:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:40.587 06:34:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:40.587 06:34:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.587 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.587 06:34:35 -- nvmf/common.sh@161 -- # true 00:09:40.587 06:34:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.587 06:34:35 -- nvmf/common.sh@162 -- # true 00:09:40.587 06:34:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.587 06:34:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.587 06:34:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.587 06:34:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.587 06:34:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.587 06:34:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.587 06:34:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.587 06:34:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:40.587 06:34:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:40.587 06:34:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:40.587 06:34:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:40.587 06:34:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:40.587 06:34:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:40.587 06:34:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.587 06:34:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.587 06:34:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.587 06:34:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:40.587 06:34:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:40.587 06:34:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.587 06:34:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.587 06:34:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.587 06:34:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.587 06:34:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.587 06:34:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:40.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:40.587 00:09:40.587 --- 10.0.0.2 ping statistics --- 00:09:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.587 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:40.587 06:34:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:40.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:40.587 00:09:40.587 --- 10.0.0.3 ping statistics --- 00:09:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.587 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:40.587 06:34:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:09:40.587 00:09:40.587 --- 10.0.0.1 ping statistics --- 00:09:40.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.587 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:40.587 06:34:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.587 06:34:36 -- nvmf/common.sh@421 -- # return 0 00:09:40.587 06:34:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:40.587 06:34:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.587 06:34:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:40.587 06:34:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:40.587 06:34:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.587 06:34:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:40.587 06:34:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:40.587 06:34:36 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:40.587 06:34:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:40.587 06:34:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.587 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 06:34:36 -- nvmf/common.sh@469 -- # nvmfpid=73262 00:09:40.587 06:34:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:40.587 06:34:36 -- nvmf/common.sh@470 -- # waitforlisten 73262 00:09:40.587 06:34:36 -- common/autotest_common.sh@829 -- # '[' -z 73262 ']' 00:09:40.587 06:34:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.587 06:34:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.587 06:34:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.587 06:34:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.587 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:40.846 [2024-12-05 06:34:36.082795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:40.846 [2024-12-05 06:34:36.082896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.846 [2024-12-05 06:34:36.225162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.846 [2024-12-05 06:34:36.269201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.846 [2024-12-05 06:34:36.269409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.846 [2024-12-05 06:34:36.269429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.846 [2024-12-05 06:34:36.269440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
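The nvmf_veth_init block above builds the test network from scratch: veth pairs for the initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything bridged over nvmf_br, 10.0.0.0/24 addresses, an iptables accept rule for port 4420, and ping checks in both directions before the target starts inside the namespace. A condensed sketch of the same topology; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted here and follows the identical pattern:

    # Condensed from nvmf_veth_init above; a sketch, not the full helper.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator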
00:09:40.846 [2024-12-05 06:34:36.269619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.846 [2024-12-05 06:34:36.269772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.846 [2024-12-05 06:34:36.270372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.846 [2024-12-05 06:34:36.270380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.105 06:34:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.105 06:34:36 -- common/autotest_common.sh@862 -- # return 0 00:09:41.105 06:34:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:41.105 06:34:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 06:34:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 [2024-12-05 06:34:36.426954] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 Malloc0 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.105 06:34:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.105 06:34:36 -- common/autotest_common.sh@10 -- # set +x 00:09:41.105 [2024-12-05 06:34:36.488574] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.105 06:34:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73290 00:09:41.105 06:34:36 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@30 -- # READ_PID=73292 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # config=() 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.105 06:34:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.105 06:34:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.105 { 00:09:41.105 "params": { 00:09:41.105 "name": "Nvme$subsystem", 00:09:41.105 "trtype": "$TEST_TRANSPORT", 00:09:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.105 "adrfam": "ipv4", 00:09:41.105 "trsvcid": "$NVMF_PORT", 00:09:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.105 "hdgst": ${hdgst:-false}, 00:09:41.105 "ddgst": ${ddgst:-false} 00:09:41.105 }, 00:09:41.105 "method": "bdev_nvme_attach_controller" 00:09:41.105 } 00:09:41.105 EOF 00:09:41.105 )") 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73294 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # config=() 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.105 06:34:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.105 06:34:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.105 { 00:09:41.105 "params": { 00:09:41.105 "name": "Nvme$subsystem", 00:09:41.105 "trtype": "$TEST_TRANSPORT", 00:09:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.105 "adrfam": "ipv4", 00:09:41.105 "trsvcid": "$NVMF_PORT", 00:09:41.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.105 "hdgst": ${hdgst:-false}, 00:09:41.105 "ddgst": ${ddgst:-false} 00:09:41.105 }, 00:09:41.105 "method": "bdev_nvme_attach_controller" 00:09:41.105 } 00:09:41.105 EOF 00:09:41.105 )") 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73297 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:41.105 06:34:36 -- nvmf/common.sh@542 -- # cat 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@35 -- # sync 00:09:41.105 06:34:36 -- nvmf/common.sh@542 -- # cat 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:41.105 06:34:36 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # config=() 00:09:41.105 06:34:36 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.105 06:34:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.105 06:34:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.105 { 00:09:41.105 "params": { 00:09:41.105 "name": "Nvme$subsystem", 00:09:41.105 "trtype": "$TEST_TRANSPORT", 00:09:41.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.105 "adrfam": "ipv4", 00:09:41.105 "trsvcid": "$NVMF_PORT", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.106 "hdgst": ${hdgst:-false}, 00:09:41.106 "ddgst": ${ddgst:-false} 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 } 00:09:41.106 EOF 00:09:41.106 )") 00:09:41.106 06:34:36 -- nvmf/common.sh@544 -- # jq . 00:09:41.106 06:34:36 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:41.106 06:34:36 -- nvmf/common.sh@542 -- # cat 00:09:41.106 06:34:36 -- nvmf/common.sh@520 -- # config=() 00:09:41.106 06:34:36 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.106 06:34:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.106 06:34:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.106 { 00:09:41.106 "params": { 00:09:41.106 "name": "Nvme$subsystem", 00:09:41.106 "trtype": "$TEST_TRANSPORT", 00:09:41.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.106 "adrfam": "ipv4", 00:09:41.106 "trsvcid": "$NVMF_PORT", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.106 "hdgst": ${hdgst:-false}, 00:09:41.106 "ddgst": ${ddgst:-false} 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 } 00:09:41.106 EOF 00:09:41.106 )") 00:09:41.106 06:34:36 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.106 06:34:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.106 "params": { 00:09:41.106 "name": "Nvme1", 00:09:41.106 "trtype": "tcp", 00:09:41.106 "traddr": "10.0.0.2", 00:09:41.106 "adrfam": "ipv4", 00:09:41.106 "trsvcid": "4420", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.106 "hdgst": false, 00:09:41.106 "ddgst": false 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 }' 00:09:41.106 06:34:36 -- nvmf/common.sh@544 -- # jq . 00:09:41.106 06:34:36 -- nvmf/common.sh@542 -- # cat 00:09:41.106 06:34:36 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.106 06:34:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.106 "params": { 00:09:41.106 "name": "Nvme1", 00:09:41.106 "trtype": "tcp", 00:09:41.106 "traddr": "10.0.0.2", 00:09:41.106 "adrfam": "ipv4", 00:09:41.106 "trsvcid": "4420", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.106 "hdgst": false, 00:09:41.106 "ddgst": false 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 }' 00:09:41.106 06:34:36 -- nvmf/common.sh@544 -- # jq . 00:09:41.106 06:34:36 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.106 06:34:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.106 "params": { 00:09:41.106 "name": "Nvme1", 00:09:41.106 "trtype": "tcp", 00:09:41.106 "traddr": "10.0.0.2", 00:09:41.106 "adrfam": "ipv4", 00:09:41.106 "trsvcid": "4420", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.106 "hdgst": false, 00:09:41.106 "ddgst": false 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 }' 00:09:41.106 06:34:36 -- nvmf/common.sh@544 -- # jq . 
00:09:41.106 06:34:36 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.106 06:34:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.106 "params": { 00:09:41.106 "name": "Nvme1", 00:09:41.106 "trtype": "tcp", 00:09:41.106 "traddr": "10.0.0.2", 00:09:41.106 "adrfam": "ipv4", 00:09:41.106 "trsvcid": "4420", 00:09:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.106 "hdgst": false, 00:09:41.106 "ddgst": false 00:09:41.106 }, 00:09:41.106 "method": "bdev_nvme_attach_controller" 00:09:41.106 }' 00:09:41.106 [2024-12-05 06:34:36.547109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:41.106 [2024-12-05 06:34:36.547203] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:41.106 [2024-12-05 06:34:36.558231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:41.106 06:34:36 -- target/bdev_io_wait.sh@37 -- # wait 73290 00:09:41.106 [2024-12-05 06:34:36.558305] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:41.364 [2024-12-05 06:34:36.577896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:41.364 [2024-12-05 06:34:36.578361] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.364 [2024-12-05 06:34:36.587858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:41.364 [2024-12-05 06:34:36.587968] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:41.364 [2024-12-05 06:34:36.726882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.364 [2024-12-05 06:34:36.748899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.364 [2024-12-05 06:34:36.764484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.364 [2024-12-05 06:34:36.792032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.364 [2024-12-05 06:34:36.813872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.621 [2024-12-05 06:34:36.838479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:41.621 Running I/O for 1 seconds... 00:09:41.621 [2024-12-05 06:34:36.863636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.621 [2024-12-05 06:34:36.884043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.621 Running I/O for 1 seconds... 00:09:41.621 Running I/O for 1 seconds... 00:09:41.621 Running I/O for 1 seconds... 
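Each of the four bdevperf instances above (write, read, flush, unmap, on distinct core masks) is handed a generated JSON config over a process-substitution fd, which is why the command lines read --json /dev/fd/63; the printf output in the log shows the bdev_nvme_attach_controller entry after variable substitution. A sketch of one invocation, assuming gen_nvmf_target_json from the sourced nvmf/common.sh wraps that entry into the full config document (the wrapper itself is not printed in the log):

    # Write case from the log; the read/flush/unmap runs differ only in
    # the -m core mask, -i instance id, and -w workload.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256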
00:09:42.555 00:09:42.555 Latency(us) 00:09:42.555 [2024-12-05T06:34:38.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.555 [2024-12-05T06:34:38.021Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:42.555 Nvme1n1 : 1.00 172449.47 673.63 0.00 0.00 739.73 357.47 1318.17 00:09:42.555 [2024-12-05T06:34:38.021Z] =================================================================================================================== 00:09:42.555 [2024-12-05T06:34:38.021Z] Total : 172449.47 673.63 0.00 0.00 739.73 357.47 1318.17 00:09:42.555 00:09:42.555 Latency(us) 00:09:42.555 [2024-12-05T06:34:38.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.555 [2024-12-05T06:34:38.021Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:42.555 Nvme1n1 : 1.01 10639.61 41.56 0.00 0.00 11983.54 6881.28 21328.99 00:09:42.555 [2024-12-05T06:34:38.021Z] =================================================================================================================== 00:09:42.555 [2024-12-05T06:34:38.021Z] Total : 10639.61 41.56 0.00 0.00 11983.54 6881.28 21328.99 00:09:42.555 00:09:42.555 Latency(us) 00:09:42.555 [2024-12-05T06:34:38.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.555 [2024-12-05T06:34:38.021Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:42.555 Nvme1n1 : 1.01 8930.39 34.88 0.00 0.00 14273.11 7447.27 25380.31 00:09:42.555 [2024-12-05T06:34:38.021Z] =================================================================================================================== 00:09:42.555 [2024-12-05T06:34:38.021Z] Total : 8930.39 34.88 0.00 0.00 14273.11 7447.27 25380.31 00:09:42.813 00:09:42.813 Latency(us) 00:09:42.813 [2024-12-05T06:34:38.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.813 [2024-12-05T06:34:38.279Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:42.813 Nvme1n1 : 1.01 7503.87 29.31 0.00 0.00 16965.45 10009.13 28835.84 00:09:42.813 [2024-12-05T06:34:38.279Z] =================================================================================================================== 00:09:42.813 [2024-12-05T06:34:38.279Z] Total : 7503.87 29.31 0.00 0.00 16965.45 10009.13 28835.84 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@38 -- # wait 73292 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@39 -- # wait 73294 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@40 -- # wait 73297 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.813 06:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.813 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:09:42.813 06:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:42.813 06:34:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:42.813 06:34:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:42.813 06:34:38 -- nvmf/common.sh@116 -- # sync 00:09:42.813 06:34:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:42.813 06:34:38 -- nvmf/common.sh@119 -- # set +e 00:09:42.813 06:34:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:42.813 06:34:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:42.813 rmmod nvme_tcp 00:09:42.813 rmmod nvme_fabrics 00:09:42.813 rmmod nvme_keyring 00:09:42.813 06:34:38 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:42.813 06:34:38 -- nvmf/common.sh@123 -- # set -e 00:09:42.813 06:34:38 -- nvmf/common.sh@124 -- # return 0 00:09:42.813 06:34:38 -- nvmf/common.sh@477 -- # '[' -n 73262 ']' 00:09:42.813 06:34:38 -- nvmf/common.sh@478 -- # killprocess 73262 00:09:42.813 06:34:38 -- common/autotest_common.sh@936 -- # '[' -z 73262 ']' 00:09:42.813 06:34:38 -- common/autotest_common.sh@940 -- # kill -0 73262 00:09:42.813 06:34:38 -- common/autotest_common.sh@941 -- # uname 00:09:42.813 06:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.813 06:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73262 00:09:43.071 06:34:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:43.072 06:34:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:43.072 killing process with pid 73262 00:09:43.072 06:34:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73262' 00:09:43.072 06:34:38 -- common/autotest_common.sh@955 -- # kill 73262 00:09:43.072 06:34:38 -- common/autotest_common.sh@960 -- # wait 73262 00:09:43.072 06:34:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:43.072 06:34:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:43.072 06:34:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:43.072 06:34:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.072 06:34:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:43.072 06:34:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.072 06:34:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.072 06:34:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.072 06:34:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:43.072 00:09:43.072 real 0m2.998s 00:09:43.072 user 0m12.627s 00:09:43.072 sys 0m2.008s 00:09:43.072 06:34:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.072 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:09:43.072 ************************************ 00:09:43.072 END TEST nvmf_bdev_io_wait 00:09:43.072 ************************************ 00:09:43.072 06:34:38 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.072 06:34:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:43.072 06:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.072 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:09:43.072 ************************************ 00:09:43.072 START TEST nvmf_queue_depth 00:09:43.072 ************************************ 00:09:43.072 06:34:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:43.331 * Looking for test storage... 
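The nvmf_queue_depth test starting here runs under the same run_test wrapper as the previous test: a START banner, the timed script body, the real/user/sys block, and an END banner. A rough sketch of that wrapper's shape, inferred only from the banners and timing output visible in this log; the actual helper in autotest_common.sh does additional xtrace and error bookkeeping:

    # Inferred shape only, not the real autotest_common.sh implementation.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # produces the real/user/sys block
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }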
00:09:43.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.331 06:34:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:43.331 06:34:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:43.331 06:34:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:43.331 06:34:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:43.331 06:34:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:43.331 06:34:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:43.331 06:34:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:43.331 06:34:38 -- scripts/common.sh@335 -- # IFS=.-: 00:09:43.331 06:34:38 -- scripts/common.sh@335 -- # read -ra ver1 00:09:43.331 06:34:38 -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.331 06:34:38 -- scripts/common.sh@336 -- # read -ra ver2 00:09:43.331 06:34:38 -- scripts/common.sh@337 -- # local 'op=<' 00:09:43.331 06:34:38 -- scripts/common.sh@339 -- # ver1_l=2 00:09:43.331 06:34:38 -- scripts/common.sh@340 -- # ver2_l=1 00:09:43.331 06:34:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:43.331 06:34:38 -- scripts/common.sh@343 -- # case "$op" in 00:09:43.331 06:34:38 -- scripts/common.sh@344 -- # : 1 00:09:43.331 06:34:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:43.331 06:34:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.331 06:34:38 -- scripts/common.sh@364 -- # decimal 1 00:09:43.331 06:34:38 -- scripts/common.sh@352 -- # local d=1 00:09:43.331 06:34:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.331 06:34:38 -- scripts/common.sh@354 -- # echo 1 00:09:43.331 06:34:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:43.331 06:34:38 -- scripts/common.sh@365 -- # decimal 2 00:09:43.331 06:34:38 -- scripts/common.sh@352 -- # local d=2 00:09:43.331 06:34:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.331 06:34:38 -- scripts/common.sh@354 -- # echo 2 00:09:43.331 06:34:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:43.331 06:34:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:43.331 06:34:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:43.331 06:34:38 -- scripts/common.sh@367 -- # return 0 00:09:43.331 06:34:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.331 06:34:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:43.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.331 --rc genhtml_branch_coverage=1 00:09:43.331 --rc genhtml_function_coverage=1 00:09:43.331 --rc genhtml_legend=1 00:09:43.331 --rc geninfo_all_blocks=1 00:09:43.331 --rc geninfo_unexecuted_blocks=1 00:09:43.331 00:09:43.331 ' 00:09:43.331 06:34:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:43.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.331 --rc genhtml_branch_coverage=1 00:09:43.331 --rc genhtml_function_coverage=1 00:09:43.331 --rc genhtml_legend=1 00:09:43.331 --rc geninfo_all_blocks=1 00:09:43.331 --rc geninfo_unexecuted_blocks=1 00:09:43.331 00:09:43.331 ' 00:09:43.331 06:34:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:43.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.331 --rc genhtml_branch_coverage=1 00:09:43.331 --rc genhtml_function_coverage=1 00:09:43.331 --rc genhtml_legend=1 00:09:43.331 --rc geninfo_all_blocks=1 00:09:43.331 --rc geninfo_unexecuted_blocks=1 00:09:43.331 00:09:43.331 ' 00:09:43.331 
06:34:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:43.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.331 --rc genhtml_branch_coverage=1 00:09:43.331 --rc genhtml_function_coverage=1 00:09:43.331 --rc genhtml_legend=1 00:09:43.331 --rc geninfo_all_blocks=1 00:09:43.331 --rc geninfo_unexecuted_blocks=1 00:09:43.331 00:09:43.331 ' 00:09:43.331 06:34:38 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.331 06:34:38 -- nvmf/common.sh@7 -- # uname -s 00:09:43.331 06:34:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.331 06:34:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.331 06:34:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.331 06:34:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.331 06:34:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.331 06:34:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.331 06:34:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.331 06:34:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.331 06:34:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.331 06:34:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.331 06:34:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:09:43.331 06:34:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:09:43.331 06:34:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.331 06:34:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.331 06:34:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.331 06:34:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.331 06:34:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.331 06:34:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.331 06:34:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.331 06:34:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.331 06:34:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.332 06:34:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.332 06:34:38 -- paths/export.sh@5 -- # export PATH 00:09:43.332 06:34:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.332 06:34:38 -- nvmf/common.sh@46 -- # : 0 00:09:43.332 06:34:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:43.332 06:34:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:43.332 06:34:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:43.332 06:34:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.332 06:34:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.332 06:34:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:43.332 06:34:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:43.332 06:34:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:43.332 06:34:38 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:43.332 06:34:38 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:43.332 06:34:38 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:43.332 06:34:38 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:43.332 06:34:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:43.332 06:34:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.332 06:34:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:43.332 06:34:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:43.332 06:34:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:43.332 06:34:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.332 06:34:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.332 06:34:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.332 06:34:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:43.332 06:34:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:43.332 06:34:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:43.332 06:34:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:43.332 06:34:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:43.332 06:34:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:43.332 06:34:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.332 06:34:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.332 06:34:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:43.332 06:34:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:43.332 06:34:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.332 06:34:38 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.332 06:34:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.332 06:34:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.332 06:34:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.332 06:34:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.332 06:34:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.332 06:34:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.332 06:34:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:43.332 06:34:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:43.332 Cannot find device "nvmf_tgt_br" 00:09:43.332 06:34:38 -- nvmf/common.sh@154 -- # true 00:09:43.332 06:34:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.332 Cannot find device "nvmf_tgt_br2" 00:09:43.332 06:34:38 -- nvmf/common.sh@155 -- # true 00:09:43.332 06:34:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:43.332 06:34:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:43.332 Cannot find device "nvmf_tgt_br" 00:09:43.332 06:34:38 -- nvmf/common.sh@157 -- # true 00:09:43.332 06:34:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:43.332 Cannot find device "nvmf_tgt_br2" 00:09:43.332 06:34:38 -- nvmf/common.sh@158 -- # true 00:09:43.332 06:34:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:43.591 06:34:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:43.591 06:34:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.591 06:34:38 -- nvmf/common.sh@161 -- # true 00:09:43.591 06:34:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.591 06:34:38 -- nvmf/common.sh@162 -- # true 00:09:43.591 06:34:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.591 06:34:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.591 06:34:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.591 06:34:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.592 06:34:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.592 06:34:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.592 06:34:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.592 06:34:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:43.592 06:34:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:43.592 06:34:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:43.592 06:34:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:43.592 06:34:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:43.592 06:34:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:43.592 06:34:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.592 06:34:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:43.592 06:34:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.592 06:34:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:43.592 06:34:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:43.592 06:34:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.592 06:34:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.592 06:34:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.592 06:34:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.592 06:34:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.592 06:34:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:43.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:43.592 00:09:43.592 --- 10.0.0.2 ping statistics --- 00:09:43.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.592 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:43.592 06:34:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:43.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:43.592 00:09:43.592 --- 10.0.0.3 ping statistics --- 00:09:43.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.592 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:43.592 06:34:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:43.592 00:09:43.592 --- 10.0.0.1 ping statistics --- 00:09:43.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.592 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:43.592 06:34:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.592 06:34:39 -- nvmf/common.sh@421 -- # return 0 00:09:43.592 06:34:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:43.592 06:34:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.592 06:34:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:43.592 06:34:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:43.592 06:34:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.592 06:34:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:43.592 06:34:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:43.592 06:34:39 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:43.592 06:34:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:43.592 06:34:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:43.592 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:09:43.851 06:34:39 -- nvmf/common.sh@469 -- # nvmfpid=73508 00:09:43.851 06:34:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:43.851 06:34:39 -- nvmf/common.sh@470 -- # waitforlisten 73508 00:09:43.851 06:34:39 -- common/autotest_common.sh@829 -- # '[' -z 73508 ']' 00:09:43.851 06:34:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.851 06:34:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.851 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:43.851 06:34:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.851 06:34:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.851 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:09:43.851 [2024-12-05 06:34:39.110982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:43.851 [2024-12-05 06:34:39.111105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.851 [2024-12-05 06:34:39.252106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.851 [2024-12-05 06:34:39.285792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:43.851 [2024-12-05 06:34:39.285963] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.851 [2024-12-05 06:34:39.285976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.851 [2024-12-05 06:34:39.285985] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.851 [2024-12-05 06:34:39.286009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.789 06:34:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.789 06:34:40 -- common/autotest_common.sh@862 -- # return 0 00:09:44.789 06:34:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:44.789 06:34:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 06:34:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.789 06:34:40 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.789 06:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 [2024-12-05 06:34:40.112653] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.789 06:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.789 06:34:40 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.789 06:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 Malloc0 00:09:44.789 06:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.789 06:34:40 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.789 06:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 06:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.789 06:34:40 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.789 06:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 06:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.789 06:34:40 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:09:44.789 06:34:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 [2024-12-05 06:34:40.177501] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.789 06:34:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.789 06:34:40 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:44.789 06:34:40 -- target/queue_depth.sh@30 -- # bdevperf_pid=73545 00:09:44.789 06:34:40 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.789 06:34:40 -- target/queue_depth.sh@33 -- # waitforlisten 73545 /var/tmp/bdevperf.sock 00:09:44.789 06:34:40 -- common/autotest_common.sh@829 -- # '[' -z 73545 ']' 00:09:44.789 06:34:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:44.789 06:34:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.789 06:34:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:44.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:44.789 06:34:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.789 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 [2024-12-05 06:34:40.219126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:44.790 [2024-12-05 06:34:40.219230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:09:45.049 [2024-12-05 06:34:40.354789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.049 [2024-12-05 06:34:40.394886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.986 06:34:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.986 06:34:41 -- common/autotest_common.sh@862 -- # return 0 00:09:45.986 06:34:41 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:45.986 06:34:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.986 06:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 NVMe0n1 00:09:45.986 06:34:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.986 06:34:41 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.986 Running I/O for 10 seconds... 
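The measurement itself runs in a second SPDK process: bdevperf starts idle under -z, the NVMe-oF controller is attached over its private RPC socket, and bdevperf.py's perform_tests fires the 10-second verify workload at queue depth 1024. In sketch form, with paths as traced above (rpc.py stands for the repo's scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &    # -z: wait for RPC configuration
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests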
00:09:55.989 00:09:55.989 Latency(us) 00:09:55.989 [2024-12-05T06:34:51.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.989 [2024-12-05T06:34:51.455Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:55.989 Verification LBA range: start 0x0 length 0x4000 00:09:55.989 NVMe0n1 : 10.06 15187.19 59.32 0.00 0.00 67183.97 14477.50 58148.31 00:09:55.989 [2024-12-05T06:34:51.455Z] =================================================================================================================== 00:09:55.989 [2024-12-05T06:34:51.455Z] Total : 15187.19 59.32 0.00 0.00 67183.97 14477.50 58148.31 00:09:55.989 0 00:09:55.989 06:34:51 -- target/queue_depth.sh@39 -- # killprocess 73545 00:09:55.989 06:34:51 -- common/autotest_common.sh@936 -- # '[' -z 73545 ']' 00:09:55.989 06:34:51 -- common/autotest_common.sh@940 -- # kill -0 73545 00:09:55.989 06:34:51 -- common/autotest_common.sh@941 -- # uname 00:09:55.989 06:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:55.989 06:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73545 00:09:55.989 06:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:55.989 06:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:55.989 06:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73545' 00:09:55.989 killing process with pid 73545 00:09:55.989 06:34:51 -- common/autotest_common.sh@955 -- # kill 73545 00:09:55.989 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.989 00:09:55.989 Latency(us) 00:09:55.989 [2024-12-05T06:34:51.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.989 [2024-12-05T06:34:51.455Z] =================================================================================================================== 00:09:55.989 [2024-12-05T06:34:51.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.989 06:34:51 -- common/autotest_common.sh@960 -- # wait 73545 00:09:56.247 06:34:51 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:56.247 06:34:51 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:56.247 06:34:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:56.247 06:34:51 -- nvmf/common.sh@116 -- # sync 00:09:56.247 06:34:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:56.247 06:34:51 -- nvmf/common.sh@119 -- # set +e 00:09:56.247 06:34:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:56.247 06:34:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:56.247 rmmod nvme_tcp 00:09:56.247 rmmod nvme_fabrics 00:09:56.247 rmmod nvme_keyring 00:09:56.247 06:34:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:56.247 06:34:51 -- nvmf/common.sh@123 -- # set -e 00:09:56.247 06:34:51 -- nvmf/common.sh@124 -- # return 0 00:09:56.247 06:34:51 -- nvmf/common.sh@477 -- # '[' -n 73508 ']' 00:09:56.247 06:34:51 -- nvmf/common.sh@478 -- # killprocess 73508 00:09:56.247 06:34:51 -- common/autotest_common.sh@936 -- # '[' -z 73508 ']' 00:09:56.247 06:34:51 -- common/autotest_common.sh@940 -- # kill -0 73508 00:09:56.247 06:34:51 -- common/autotest_common.sh@941 -- # uname 00:09:56.247 06:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:56.247 06:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73508 00:09:56.504 06:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:56.504 06:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:09:56.504 killing process with pid 73508 00:09:56.504 06:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73508' 00:09:56.504 06:34:51 -- common/autotest_common.sh@955 -- # kill 73508 00:09:56.504 06:34:51 -- common/autotest_common.sh@960 -- # wait 73508 00:09:56.504 06:34:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:56.504 06:34:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:56.504 06:34:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:56.504 06:34:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.504 06:34:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:56.504 06:34:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.504 06:34:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.504 06:34:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.504 06:34:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:56.504 00:09:56.504 real 0m13.389s 00:09:56.504 user 0m23.218s 00:09:56.504 sys 0m1.984s 00:09:56.504 06:34:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:56.504 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:09:56.504 ************************************ 00:09:56.504 END TEST nvmf_queue_depth 00:09:56.504 ************************************ 00:09:56.504 06:34:51 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:56.504 06:34:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:56.504 06:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.504 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:09:56.504 ************************************ 00:09:56.504 START TEST nvmf_multipath 00:09:56.504 ************************************ 00:09:56.504 06:34:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:56.762 * Looking for test storage... 00:09:56.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.762 06:34:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:56.762 06:34:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:56.762 06:34:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:56.762 06:34:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:56.762 06:34:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:56.762 06:34:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:56.762 06:34:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:56.762 06:34:52 -- scripts/common.sh@335 -- # IFS=.-: 00:09:56.762 06:34:52 -- scripts/common.sh@335 -- # read -ra ver1 00:09:56.762 06:34:52 -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.762 06:34:52 -- scripts/common.sh@336 -- # read -ra ver2 00:09:56.762 06:34:52 -- scripts/common.sh@337 -- # local 'op=<' 00:09:56.762 06:34:52 -- scripts/common.sh@339 -- # ver1_l=2 00:09:56.762 06:34:52 -- scripts/common.sh@340 -- # ver2_l=1 00:09:56.762 06:34:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:56.762 06:34:52 -- scripts/common.sh@343 -- # case "$op" in 00:09:56.762 06:34:52 -- scripts/common.sh@344 -- # : 1 00:09:56.762 06:34:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:56.762 06:34:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.762 06:34:52 -- scripts/common.sh@364 -- # decimal 1 00:09:56.762 06:34:52 -- scripts/common.sh@352 -- # local d=1 00:09:56.762 06:34:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.762 06:34:52 -- scripts/common.sh@354 -- # echo 1 00:09:56.762 06:34:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:56.762 06:34:52 -- scripts/common.sh@365 -- # decimal 2 00:09:56.762 06:34:52 -- scripts/common.sh@352 -- # local d=2 00:09:56.762 06:34:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.762 06:34:52 -- scripts/common.sh@354 -- # echo 2 00:09:56.762 06:34:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:56.762 06:34:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:56.762 06:34:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:56.762 06:34:52 -- scripts/common.sh@367 -- # return 0 00:09:56.762 06:34:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.762 06:34:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:56.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.762 --rc genhtml_branch_coverage=1 00:09:56.762 --rc genhtml_function_coverage=1 00:09:56.762 --rc genhtml_legend=1 00:09:56.762 --rc geninfo_all_blocks=1 00:09:56.762 --rc geninfo_unexecuted_blocks=1 00:09:56.762 00:09:56.762 ' 00:09:56.762 06:34:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:56.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.762 --rc genhtml_branch_coverage=1 00:09:56.762 --rc genhtml_function_coverage=1 00:09:56.762 --rc genhtml_legend=1 00:09:56.762 --rc geninfo_all_blocks=1 00:09:56.762 --rc geninfo_unexecuted_blocks=1 00:09:56.762 00:09:56.762 ' 00:09:56.762 06:34:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:56.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.762 --rc genhtml_branch_coverage=1 00:09:56.762 --rc genhtml_function_coverage=1 00:09:56.762 --rc genhtml_legend=1 00:09:56.762 --rc geninfo_all_blocks=1 00:09:56.762 --rc geninfo_unexecuted_blocks=1 00:09:56.762 00:09:56.762 ' 00:09:56.762 06:34:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:56.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.762 --rc genhtml_branch_coverage=1 00:09:56.762 --rc genhtml_function_coverage=1 00:09:56.762 --rc genhtml_legend=1 00:09:56.762 --rc geninfo_all_blocks=1 00:09:56.762 --rc geninfo_unexecuted_blocks=1 00:09:56.762 00:09:56.762 ' 00:09:56.762 06:34:52 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.762 06:34:52 -- nvmf/common.sh@7 -- # uname -s 00:09:56.762 06:34:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.762 06:34:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.762 06:34:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.762 06:34:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.762 06:34:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.762 06:34:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.762 06:34:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.763 06:34:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.763 06:34:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.763 06:34:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.763 06:34:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:09:56.763 
06:34:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:09:56.763 06:34:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.763 06:34:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.763 06:34:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.763 06:34:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.763 06:34:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.763 06:34:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.763 06:34:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.763 06:34:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.763 06:34:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.763 06:34:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.763 06:34:52 -- paths/export.sh@5 -- # export PATH 00:09:56.763 06:34:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.763 06:34:52 -- nvmf/common.sh@46 -- # : 0 00:09:56.763 06:34:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:56.763 06:34:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:56.763 06:34:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:56.763 06:34:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.763 06:34:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.763 06:34:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
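nvme gen-hostnqn mints a fresh UUID-based host NQN for this run, and the same UUID is reused as the host ID on every later connect. A sketch of that wiring (the suffix extraction here is an assumption about how NVME_HOSTID is derived, not the literal common.sh code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: strip everything through "uuid:", leaving the bare UUID
    # later: nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" ...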
00:09:56.763 06:34:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:56.763 06:34:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:56.763 06:34:52 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.763 06:34:52 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.763 06:34:52 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:56.763 06:34:52 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.763 06:34:52 -- target/multipath.sh@43 -- # nvmftestinit 00:09:56.763 06:34:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:56.763 06:34:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.763 06:34:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:56.763 06:34:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:56.763 06:34:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:56.763 06:34:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.763 06:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.763 06:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.763 06:34:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:56.763 06:34:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:56.763 06:34:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:56.763 06:34:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:56.763 06:34:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:56.763 06:34:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:56.763 06:34:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.763 06:34:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.763 06:34:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.763 06:34:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:56.763 06:34:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.763 06:34:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.763 06:34:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.763 06:34:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.763 06:34:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.763 06:34:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.763 06:34:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.763 06:34:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.763 06:34:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:56.763 06:34:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:56.763 Cannot find device "nvmf_tgt_br" 00:09:56.763 06:34:52 -- nvmf/common.sh@154 -- # true 00:09:56.763 06:34:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.763 Cannot find device "nvmf_tgt_br2" 00:09:56.763 06:34:52 -- nvmf/common.sh@155 -- # true 00:09:56.763 06:34:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:56.763 06:34:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:57.022 Cannot find device "nvmf_tgt_br" 00:09:57.022 06:34:52 -- nvmf/common.sh@157 -- # true 00:09:57.022 06:34:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:57.022 Cannot find device "nvmf_tgt_br2" 00:09:57.022 06:34:52 -- nvmf/common.sh@158 -- # true 00:09:57.022 06:34:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:57.022 06:34:52 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:57.022 06:34:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.022 06:34:52 -- nvmf/common.sh@161 -- # true 00:09:57.022 06:34:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.022 06:34:52 -- nvmf/common.sh@162 -- # true 00:09:57.022 06:34:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.022 06:34:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.022 06:34:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.022 06:34:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.022 06:34:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.022 06:34:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.022 06:34:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.022 06:34:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.022 06:34:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.022 06:34:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:57.022 06:34:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:57.022 06:34:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:57.022 06:34:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:57.022 06:34:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.022 06:34:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.022 06:34:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.022 06:34:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:57.022 06:34:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:57.022 06:34:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.022 06:34:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.022 06:34:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.022 06:34:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.022 06:34:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.280 06:34:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:57.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:09:57.280 00:09:57.280 --- 10.0.0.2 ping statistics --- 00:09:57.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.280 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:09:57.280 06:34:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:57.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:57.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:09:57.280 00:09:57.280 --- 10.0.0.3 ping statistics --- 00:09:57.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.280 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:57.280 06:34:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:57.280 00:09:57.280 --- 10.0.0.1 ping statistics --- 00:09:57.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.280 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:57.280 06:34:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.280 06:34:52 -- nvmf/common.sh@421 -- # return 0 00:09:57.280 06:34:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:57.280 06:34:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.280 06:34:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:57.280 06:34:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:57.280 06:34:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.280 06:34:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:57.280 06:34:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:57.280 06:34:52 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:57.280 06:34:52 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:57.280 06:34:52 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:57.280 06:34:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:57.280 06:34:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.280 06:34:52 -- common/autotest_common.sh@10 -- # set +x 00:09:57.280 06:34:52 -- nvmf/common.sh@469 -- # nvmfpid=73868 00:09:57.280 06:34:52 -- nvmf/common.sh@470 -- # waitforlisten 73868 00:09:57.280 06:34:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.280 06:34:52 -- common/autotest_common.sh@829 -- # '[' -z 73868 ']' 00:09:57.280 06:34:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.280 06:34:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.280 06:34:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.280 06:34:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.280 06:34:52 -- common/autotest_common.sh@10 -- # set +x 00:09:57.280 [2024-12-05 06:34:52.576953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:57.280 [2024-12-05 06:34:52.577039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.280 [2024-12-05 06:34:52.712430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.538 [2024-12-05 06:34:52.745535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.538 [2024-12-05 06:34:52.745702] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
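The three ping checks above close out nvmf_veth_init end to end. The ip commands replayed before them built one initiator-side veth pair in the default namespace and two target-side pairs whose device ends move into nvmf_tgt_ns_spdk, with all three peer ends enslaved to the nvmf_br bridge; iptables then accepts TCP/4420 inbound and forwarding across the bridge. Roughly:

    default netns                                netns nvmf_tgt_ns_spdk
    nvmf_init_if 10.0.0.1/24 ----+           +---- nvmf_tgt_if  10.0.0.2/24
                                 |  nvmf_br  |
                                 +-----------+---- nvmf_tgt_if2 10.0.0.3/24
    (bridge ports: nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2)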
00:09:57.538 [2024-12-05 06:34:52.745714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.538 [2024-12-05 06:34:52.745723] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.538 [2024-12-05 06:34:52.745782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.539 [2024-12-05 06:34:52.746519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.539 [2024-12-05 06:34:52.746582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.539 [2024-12-05 06:34:52.746588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.106 06:34:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.106 06:34:53 -- common/autotest_common.sh@862 -- # return 0 00:09:58.106 06:34:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:58.106 06:34:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.106 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:09:58.365 06:34:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.365 06:34:53 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.365 [2024-12-05 06:34:53.805538] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.624 06:34:53 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:58.624 Malloc0 00:09:58.883 06:34:54 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:58.883 06:34:54 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.141 06:34:54 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.400 [2024-12-05 06:34:54.822998] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.400 06:34:54 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.659 [2024-12-05 06:34:55.067201] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:59.659 06:34:55 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:59.917 06:34:55 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:59.917 06:34:55 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.917 06:34:55 -- common/autotest_common.sh@1187 -- # local i=0 00:09:59.917 06:34:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.917 06:34:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:59.917 06:34:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:02.487 06:34:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
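The two connects above point at the same subsystem NQN through different listener addresses, so the host folds them into a single NVMe subsystem with two controllers, i.e. two paths; -g and -G additionally enable TCP header and data digests. waitforserial, whose retry loop has just been entered, then polls lsblk until a namespace carrying the test serial appears. A simplified sketch:

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    for i in $(seq 1 15); do    # condensed stand-in for the waitforserial loop
        lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
        sleep 1
    done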
00:10:02.487 06:34:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:02.487 06:34:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.487 06:34:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:02.487 06:34:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.487 06:34:57 -- common/autotest_common.sh@1197 -- # return 0 00:10:02.487 06:34:57 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:02.487 06:34:57 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:02.487 06:34:57 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:02.487 06:34:57 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:02.487 06:34:57 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:02.487 06:34:57 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:02.487 06:34:57 -- target/multipath.sh@38 -- # return 0 00:10:02.487 06:34:57 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:02.487 06:34:57 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:02.487 06:34:57 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:02.487 06:34:57 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:02.487 06:34:57 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:02.487 06:34:57 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:02.487 06:34:57 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:02.487 06:34:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:02.487 06:34:57 -- target/multipath.sh@22 -- # local timeout=20 00:10:02.487 06:34:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.487 06:34:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.487 06:34:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.487 06:34:57 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:02.487 06:34:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:02.487 06:34:57 -- target/multipath.sh@22 -- # local timeout=20 00:10:02.487 06:34:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.487 06:34:57 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.487 06:34:57 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.487 06:34:57 -- target/multipath.sh@85 -- # echo numa 00:10:02.487 06:34:57 -- target/multipath.sh@88 -- # fio_pid=73959 00:10:02.487 06:34:57 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:02.487 06:34:57 -- target/multipath.sh@90 -- # sleep 1 00:10:02.487 [global] 00:10:02.487 thread=1 00:10:02.487 invalidate=1 00:10:02.487 rw=randrw 00:10:02.487 time_based=1 00:10:02.487 runtime=6 00:10:02.487 ioengine=libaio 00:10:02.487 direct=1 00:10:02.487 bs=4096 00:10:02.487 iodepth=128 00:10:02.487 norandommap=0 00:10:02.487 numjobs=1 00:10:02.487 00:10:02.487 verify_dump=1 00:10:02.487 verify_backlog=512 00:10:02.487 verify_state_save=0 00:10:02.487 do_verify=1 00:10:02.487 verify=crc32c-intel 00:10:02.487 [job0] 00:10:02.487 filename=/dev/nvme0n1 00:10:02.487 Could not set queue depth (nvme0n1) 00:10:02.487 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.487 fio-3.35 00:10:02.487 Starting 1 thread 00:10:03.055 06:34:58 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:03.313 06:34:58 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:03.571 06:34:58 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:03.571 06:34:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:03.571 06:34:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:03.571 06:34:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.571 06:34:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.571 06:34:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:03.571 06:34:58 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:03.571 06:34:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:03.571 06:34:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:03.571 06:34:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.571 06:34:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.571 06:34:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:03.571 06:34:58 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:03.829 06:34:59 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:04.087 06:34:59 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:04.087 06:34:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:04.087 06:34:59 -- target/multipath.sh@22 -- # local timeout=20 00:10:04.087 06:34:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.087 06:34:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.087 06:34:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.087 06:34:59 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:04.087 06:34:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:04.087 06:34:59 -- target/multipath.sh@22 -- # local timeout=20 00:10:04.087 06:34:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.087 06:34:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.087 06:34:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.087 06:34:59 -- target/multipath.sh@104 -- # wait 73959 00:10:08.297 00:10:08.297 job0: (groupid=0, jobs=1): err= 0: pid=73986: Thu Dec 5 06:35:03 2024 00:10:08.297 read: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(257MiB/6007msec) 00:10:08.297 slat (usec): min=4, max=5659, avg=53.81, stdev=225.58 00:10:08.297 clat (usec): min=1194, max=13999, avg=7963.96, stdev=1417.87 00:10:08.297 lat (usec): min=1223, max=14523, avg=8017.77, stdev=1422.29 00:10:08.297 clat percentiles (usec): 00:10:08.297 | 1.00th=[ 4146], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7111], 00:10:08.297 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8094], 00:10:08.297 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11207], 00:10:08.297 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13042], 99.95th=[13304], 00:10:08.297 | 99.99th=[13829] 00:10:08.297 bw ( KiB/s): min= 9984, max=26784, per=51.20%, avg=22412.36, stdev=5464.81, samples=11 00:10:08.297 iops : min= 2496, max= 6696, avg=5603.09, stdev=1366.20, samples=11 00:10:08.297 write: IOPS=6262, BW=24.5MiB/s (25.7MB/s)(133MiB/5426msec); 0 zone resets 00:10:08.297 slat (usec): min=12, max=1894, avg=62.75, stdev=154.02 00:10:08.297 clat (usec): min=797, max=13725, avg=7017.61, stdev=1295.42 00:10:08.297 lat (usec): min=875, max=13754, avg=7080.36, stdev=1301.22 00:10:08.297 clat percentiles (usec): 00:10:08.297 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 5276], 20.00th=[ 6521], 00:10:08.297 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7373], 00:10:08.297 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 00:10:08.297 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12649], 99.95th=[12911], 00:10:08.297 | 99.99th=[13435] 00:10:08.297 bw ( KiB/s): min=10336, max=26232, per=89.47%, avg=22413.09, stdev=5211.11, samples=11 00:10:08.297 iops : min= 2584, max= 6558, avg=5603.27, stdev=1302.78, samples=11 00:10:08.297 lat (usec) : 1000=0.01% 00:10:08.297 lat (msec) : 2=0.03%, 4=2.05%, 10=92.21%, 20=5.70% 00:10:08.297 cpu : usr=5.59%, sys=21.60%, ctx=5591, majf=0, minf=78 00:10:08.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:08.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.297 issued rwts: total=65737,33983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.297 00:10:08.297 Run status group 0 (all jobs): 00:10:08.297 READ: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=257MiB (269MB), run=6007-6007msec 00:10:08.297 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=133MiB (139MB), run=5426-5426msec 00:10:08.297 00:10:08.297 Disk stats (read/write): 00:10:08.297 
nvme0n1: ios=64783/33346, merge=0/0, ticks=495131/220113, in_queue=715244, util=98.60% 00:10:08.297 06:35:03 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:08.866 06:35:04 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:08.866 06:35:04 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:08.866 06:35:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:08.866 06:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:10:08.866 06:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:08.866 06:35:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:08.866 06:35:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:08.866 06:35:04 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:08.866 06:35:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:08.866 06:35:04 -- target/multipath.sh@22 -- # local timeout=20 00:10:08.866 06:35:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:08.866 06:35:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:08.866 06:35:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:08.866 06:35:04 -- target/multipath.sh@113 -- # echo round-robin 00:10:08.866 06:35:04 -- target/multipath.sh@116 -- # fio_pid=74062 00:10:08.866 06:35:04 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:08.866 06:35:04 -- target/multipath.sh@118 -- # sleep 1 00:10:08.866 [global] 00:10:08.866 thread=1 00:10:08.866 invalidate=1 00:10:08.866 rw=randrw 00:10:08.866 time_based=1 00:10:08.866 runtime=6 00:10:08.866 ioengine=libaio 00:10:08.866 direct=1 00:10:08.866 bs=4096 00:10:08.866 iodepth=128 00:10:08.866 norandommap=0 00:10:08.866 numjobs=1 00:10:08.866 00:10:08.866 verify_dump=1 00:10:08.866 verify_backlog=512 00:10:08.866 verify_state_save=0 00:10:08.866 do_verify=1 00:10:08.866 verify=crc32c-intel 00:10:08.866 [job0] 00:10:08.866 filename=/dev/nvme0n1 00:10:09.126 Could not set queue depth (nvme0n1) 00:10:09.126 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.126 fio-3.35 00:10:09.126 Starting 1 thread 00:10:10.062 06:35:05 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:10.320 06:35:05 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:10.578 06:35:05 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:10.578 06:35:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:10.578 06:35:05 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.579 06:35:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.579 06:35:05 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.579 06:35:05 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:10.579 06:35:05 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:10.579 06:35:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:10.579 06:35:05 -- target/multipath.sh@22 -- # local timeout=20 00:10:10.579 06:35:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.579 06:35:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.579 06:35:05 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.579 06:35:05 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:10.838 06:35:06 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:11.097 06:35:06 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:11.097 06:35:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:11.097 06:35:06 -- target/multipath.sh@22 -- # local timeout=20 00:10:11.097 06:35:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:11.097 06:35:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:11.097 06:35:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:11.097 06:35:06 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:11.097 06:35:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:11.097 06:35:06 -- target/multipath.sh@22 -- # local timeout=20 00:10:11.097 06:35:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:11.097 06:35:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:11.097 06:35:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:11.097 06:35:06 -- target/multipath.sh@132 -- # wait 74062 00:10:15.290 00:10:15.290 job0: (groupid=0, jobs=1): err= 0: pid=74083: Thu Dec 5 06:35:10 2024 00:10:15.290 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(285MiB/6003msec) 00:10:15.290 slat (usec): min=4, max=5861, avg=41.76, stdev=197.15 00:10:15.290 clat (usec): min=367, max=15333, avg=7265.79, stdev=1812.99 00:10:15.290 lat (usec): min=379, max=15368, avg=7307.55, stdev=1826.50 00:10:15.290 clat percentiles (usec): 00:10:15.290 | 1.00th=[ 3032], 5.00th=[ 4015], 10.00th=[ 4752], 20.00th=[ 5735], 00:10:15.290 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7767], 00:10:15.290 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10552], 00:10:15.290 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12911], 99.95th=[13042], 00:10:15.290 | 99.99th=[13698] 00:10:15.290 bw ( KiB/s): min= 9664, max=41728, per=52.41%, avg=25512.73, stdev=8569.70, samples=11 00:10:15.290 iops : min= 2416, max=10432, avg=6378.18, stdev=2142.43, samples=11 00:10:15.290 write: IOPS=6992, BW=27.3MiB/s (28.6MB/s)(146MiB/5341msec); 0 zone resets 00:10:15.290 slat (usec): min=14, max=2849, avg=51.60, stdev=130.52 00:10:15.290 clat (usec): min=1064, max=13638, avg=6194.09, stdev=1734.00 00:10:15.290 lat (usec): min=1104, max=13661, avg=6245.68, stdev=1748.00 00:10:15.290 clat percentiles (usec): 00:10:15.290 | 1.00th=[ 2606], 5.00th=[ 3228], 10.00th=[ 3654], 20.00th=[ 4293], 00:10:15.290 | 30.00th=[ 5014], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7046], 00:10:15.290 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8225], 00:10:15.290 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12256], 99.95th=[12649], 00:10:15.290 | 99.99th=[13435] 00:10:15.290 bw ( KiB/s): min= 9968, max=40504, per=91.13%, avg=25490.18, stdev=8286.76, samples=11 00:10:15.290 iops : min= 2492, max=10126, avg=6372.55, stdev=2071.69, samples=11 00:10:15.290 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:15.290 lat (msec) : 2=0.19%, 4=8.37%, 10=87.06%, 20=4.36% 00:10:15.290 cpu : usr=6.18%, sys=22.71%, ctx=5970, majf=0, minf=90 00:10:15.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:15.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.290 issued rwts: total=73058,37348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.290 00:10:15.290 Run status group 0 (all jobs): 00:10:15.290 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=285MiB (299MB), run=6003-6003msec 00:10:15.290 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=146MiB (153MB), run=5341-5341msec 00:10:15.290 00:10:15.290 Disk stats (read/write): 00:10:15.290 nvme0n1: ios=71641/37313, merge=0/0, ticks=496862/215872, in_queue=712734, util=98.66% 00:10:15.290 06:35:10 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.290 06:35:10 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.290 06:35:10 -- common/autotest_common.sh@1208 -- # local i=0 00:10:15.290 06:35:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:15.290 06:35:10 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.290 06:35:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:15.290 06:35:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.290 06:35:10 -- common/autotest_common.sh@1220 -- # return 0 00:10:15.290 06:35:10 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.549 06:35:10 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:15.549 06:35:10 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:15.549 06:35:11 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:15.549 06:35:11 -- target/multipath.sh@144 -- # nvmftestfini 00:10:15.549 06:35:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:15.549 06:35:11 -- nvmf/common.sh@116 -- # sync 00:10:15.809 06:35:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:15.809 06:35:11 -- nvmf/common.sh@119 -- # set +e 00:10:15.809 06:35:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:15.809 06:35:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:15.809 rmmod nvme_tcp 00:10:15.809 rmmod nvme_fabrics 00:10:15.809 rmmod nvme_keyring 00:10:15.809 06:35:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:15.809 06:35:11 -- nvmf/common.sh@123 -- # set -e 00:10:15.809 06:35:11 -- nvmf/common.sh@124 -- # return 0 00:10:15.809 06:35:11 -- nvmf/common.sh@477 -- # '[' -n 73868 ']' 00:10:15.809 06:35:11 -- nvmf/common.sh@478 -- # killprocess 73868 00:10:15.809 06:35:11 -- common/autotest_common.sh@936 -- # '[' -z 73868 ']' 00:10:15.809 06:35:11 -- common/autotest_common.sh@940 -- # kill -0 73868 00:10:15.809 06:35:11 -- common/autotest_common.sh@941 -- # uname 00:10:15.809 06:35:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.809 06:35:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73868 00:10:15.809 killing process with pid 73868 00:10:15.809 06:35:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:15.809 06:35:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:15.809 06:35:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73868' 00:10:15.809 06:35:11 -- common/autotest_common.sh@955 -- # kill 73868 00:10:15.809 06:35:11 -- common/autotest_common.sh@960 -- # wait 73868 00:10:16.068 06:35:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:16.068 06:35:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:16.068 06:35:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:16.068 06:35:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.068 06:35:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:16.068 06:35:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.068 06:35:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.068 06:35:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.068 06:35:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:16.068 00:10:16.068 real 0m19.380s 00:10:16.068 user 1m12.522s 00:10:16.068 sys 0m10.065s 00:10:16.068 06:35:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.068 06:35:11 -- common/autotest_common.sh@10 -- # set +x 00:10:16.068 ************************************ 00:10:16.068 END TEST nvmf_multipath 00:10:16.068 ************************************ 00:10:16.068 06:35:11 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.068 06:35:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:16.068 06:35:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.068 06:35:11 -- common/autotest_common.sh@10 -- # set +x 00:10:16.068 ************************************ 00:10:16.068 START TEST nvmf_zcopy 00:10:16.068 ************************************ 00:10:16.068 06:35:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.068 * Looking for test storage... 00:10:16.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.069 06:35:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:16.069 06:35:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:16.069 06:35:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:16.327 06:35:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:16.327 06:35:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:16.327 06:35:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:16.327 06:35:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:16.327 06:35:11 -- scripts/common.sh@335 -- # IFS=.-: 00:10:16.327 06:35:11 -- scripts/common.sh@335 -- # read -ra ver1 00:10:16.327 06:35:11 -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.327 06:35:11 -- scripts/common.sh@336 -- # read -ra ver2 00:10:16.327 06:35:11 -- scripts/common.sh@337 -- # local 'op=<' 00:10:16.327 06:35:11 -- scripts/common.sh@339 -- # ver1_l=2 00:10:16.327 06:35:11 -- scripts/common.sh@340 -- # ver2_l=1 00:10:16.327 06:35:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:16.327 06:35:11 -- scripts/common.sh@343 -- # case "$op" in 00:10:16.327 06:35:11 -- scripts/common.sh@344 -- # : 1 00:10:16.327 06:35:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:16.327 06:35:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.327 06:35:11 -- scripts/common.sh@364 -- # decimal 1 00:10:16.327 06:35:11 -- scripts/common.sh@352 -- # local d=1 00:10:16.327 06:35:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.327 06:35:11 -- scripts/common.sh@354 -- # echo 1 00:10:16.327 06:35:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:16.327 06:35:11 -- scripts/common.sh@365 -- # decimal 2 00:10:16.327 06:35:11 -- scripts/common.sh@352 -- # local d=2 00:10:16.327 06:35:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.327 06:35:11 -- scripts/common.sh@354 -- # echo 2 00:10:16.327 06:35:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:16.327 06:35:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:16.327 06:35:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:16.327 06:35:11 -- scripts/common.sh@367 -- # return 0 00:10:16.327 06:35:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.327 06:35:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:16.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.327 --rc genhtml_branch_coverage=1 00:10:16.327 --rc genhtml_function_coverage=1 00:10:16.327 --rc genhtml_legend=1 00:10:16.327 --rc geninfo_all_blocks=1 00:10:16.327 --rc geninfo_unexecuted_blocks=1 00:10:16.327 00:10:16.327 ' 00:10:16.327 06:35:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:16.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.327 --rc genhtml_branch_coverage=1 00:10:16.327 --rc genhtml_function_coverage=1 00:10:16.327 --rc genhtml_legend=1 00:10:16.327 --rc geninfo_all_blocks=1 00:10:16.327 --rc geninfo_unexecuted_blocks=1 00:10:16.328 00:10:16.328 ' 00:10:16.328 06:35:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:16.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.328 --rc genhtml_branch_coverage=1 00:10:16.328 --rc genhtml_function_coverage=1 00:10:16.328 --rc genhtml_legend=1 00:10:16.328 --rc geninfo_all_blocks=1 00:10:16.328 --rc geninfo_unexecuted_blocks=1 00:10:16.328 00:10:16.328 ' 00:10:16.328 06:35:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:16.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.328 --rc genhtml_branch_coverage=1 00:10:16.328 --rc genhtml_function_coverage=1 00:10:16.328 --rc genhtml_legend=1 00:10:16.328 --rc geninfo_all_blocks=1 00:10:16.328 --rc geninfo_unexecuted_blocks=1 00:10:16.328 00:10:16.328 ' 00:10:16.328 06:35:11 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.328 06:35:11 -- nvmf/common.sh@7 -- # uname -s 00:10:16.328 06:35:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.328 06:35:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.328 06:35:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.328 06:35:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.328 06:35:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.328 06:35:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.328 06:35:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.328 06:35:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.328 06:35:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.328 06:35:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:16.328 
06:35:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:16.328 06:35:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.328 06:35:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.328 06:35:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.328 06:35:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.328 06:35:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.328 06:35:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.328 06:35:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.328 06:35:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.328 06:35:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.328 06:35:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.328 06:35:11 -- paths/export.sh@5 -- # export PATH 00:10:16.328 06:35:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.328 06:35:11 -- nvmf/common.sh@46 -- # : 0 00:10:16.328 06:35:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:16.328 06:35:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:16.328 06:35:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:16.328 06:35:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.328 06:35:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.328 06:35:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
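Worth noting in the paths/export.sh lines above: the go/protoc/golangci prefix trio appears several times in PATH because each re-source of the script prepends unconditionally. Lookup still works (first match wins), but a guard would make repeated sourcing idempotent; a hypothetical variant (pathmunge is an illustrative name, not part of the SPDK scripts):

# Prepend a directory to PATH only if it is not already present.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;; # already present, nothing to do
        *) PATH="$1:$PATH" ;;
    esac
}

pathmunge /opt/golangci/1.54.2/bin
pathmunge /opt/go/1.21.1/bin
pathmunge /opt/protoc/21.7/bin
export PATH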
00:10:16.328 06:35:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:16.328 06:35:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:16.328 06:35:11 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:16.328 06:35:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:16.328 06:35:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.328 06:35:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:16.328 06:35:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:16.328 06:35:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:16.328 06:35:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.328 06:35:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.328 06:35:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.328 06:35:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:16.328 06:35:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:16.328 06:35:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.328 06:35:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.328 06:35:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.328 06:35:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:16.328 06:35:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.328 06:35:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.328 06:35:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.328 06:35:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.328 06:35:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.328 06:35:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.328 06:35:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.328 06:35:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.328 06:35:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:16.328 06:35:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:16.328 Cannot find device "nvmf_tgt_br" 00:10:16.328 06:35:11 -- nvmf/common.sh@154 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.328 Cannot find device "nvmf_tgt_br2" 00:10:16.328 06:35:11 -- nvmf/common.sh@155 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:16.328 06:35:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:16.328 Cannot find device "nvmf_tgt_br" 00:10:16.328 06:35:11 -- nvmf/common.sh@157 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:16.328 Cannot find device "nvmf_tgt_br2" 00:10:16.328 06:35:11 -- nvmf/common.sh@158 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:16.328 06:35:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:16.328 06:35:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.328 06:35:11 -- nvmf/common.sh@161 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.328 06:35:11 -- nvmf/common.sh@162 -- # true 00:10:16.328 06:35:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.328 06:35:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.328 06:35:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.328 06:35:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.328 06:35:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.328 06:35:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.588 06:35:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.588 06:35:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.588 06:35:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.588 06:35:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:16.588 06:35:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:16.588 06:35:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:16.588 06:35:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:16.588 06:35:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.588 06:35:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.588 06:35:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.588 06:35:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:16.588 06:35:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:16.588 06:35:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.588 06:35:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.588 06:35:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.588 06:35:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.588 06:35:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.588 06:35:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:16.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:16.588 00:10:16.588 --- 10.0.0.2 ping statistics --- 00:10:16.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.588 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:16.588 06:35:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:16.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:16.588 00:10:16.588 --- 10.0.0.3 ping statistics --- 00:10:16.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.588 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:16.588 06:35:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:16.588 00:10:16.588 --- 10.0.0.1 ping statistics --- 00:10:16.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.588 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:16.588 06:35:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.588 06:35:11 -- nvmf/common.sh@421 -- # return 0 00:10:16.588 06:35:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:16.588 06:35:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.588 06:35:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:16.588 06:35:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:16.588 06:35:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.588 06:35:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:16.588 06:35:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:16.588 06:35:11 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.588 06:35:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:16.588 06:35:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.588 06:35:11 -- common/autotest_common.sh@10 -- # set +x 00:10:16.588 06:35:11 -- nvmf/common.sh@469 -- # nvmfpid=74343 00:10:16.588 06:35:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.588 06:35:11 -- nvmf/common.sh@470 -- # waitforlisten 74343 00:10:16.588 06:35:11 -- common/autotest_common.sh@829 -- # '[' -z 74343 ']' 00:10:16.588 06:35:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.588 06:35:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.588 06:35:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.588 06:35:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.588 06:35:11 -- common/autotest_common.sh@10 -- # set +x 00:10:16.588 [2024-12-05 06:35:11.983425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:16.588 [2024-12-05 06:35:11.983688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.847 [2024-12-05 06:35:12.116855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.847 [2024-12-05 06:35:12.149905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.847 [2024-12-05 06:35:12.150307] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.847 [2024-12-05 06:35:12.150339] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.847 [2024-12-05 06:35:12.150351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
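Before the target app comes up, the nvmf_veth_init sequence above rebuilds the whole test fabric from nothing: a network namespace for the target, veth pairs, a bridge joining the host-side ends, addresses in 10.0.0.0/24, an iptables ACCEPT for the NVMe/TCP port, and ping checks between the endpoints. A condensed sketch of the same topology, using the interface and namespace names from the log (the second target pair, nvmf_tgt_if2/10.0.0.3, is set up the same way and elided here; assumes iproute2 and root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target, as in the log

The earlier 'Cannot find device' / 'Cannot open network namespace' errors are expected: the teardown half of the helper runs first on a box where none of these devices exist yet, and each failing command is tolerated via the '# true' that follows it.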
00:10:16.847 [2024-12-05 06:35:12.150379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.847 06:35:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.847 06:35:12 -- common/autotest_common.sh@862 -- # return 0 00:10:16.847 06:35:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:16.847 06:35:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.847 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 06:35:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.847 06:35:12 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:16.847 06:35:12 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:16.847 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.847 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 [2024-12-05 06:35:12.282869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.847 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.847 06:35:12 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:16.847 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.847 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.847 06:35:12 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.847 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.847 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:16.847 [2024-12-05 06:35:12.298961] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.847 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.847 06:35:12 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:16.847 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.847 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:17.106 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.106 06:35:12 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.106 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.106 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:17.106 malloc0 00:10:17.106 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.106 06:35:12 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.106 06:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.106 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:10:17.106 06:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.106 06:35:12 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:17.106 06:35:12 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:17.106 06:35:12 -- nvmf/common.sh@520 -- # config=() 00:10:17.106 06:35:12 -- nvmf/common.sh@520 -- # local subsystem config 00:10:17.106 06:35:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:17.106 06:35:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:17.106 { 00:10:17.106 "params": { 00:10:17.106 "name": "Nvme$subsystem", 00:10:17.106 "trtype": "$TEST_TRANSPORT", 
00:10:17.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.106 "adrfam": "ipv4", 00:10:17.106 "trsvcid": "$NVMF_PORT", 00:10:17.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.106 "hdgst": ${hdgst:-false}, 00:10:17.106 "ddgst": ${ddgst:-false} 00:10:17.106 }, 00:10:17.106 "method": "bdev_nvme_attach_controller" 00:10:17.106 } 00:10:17.106 EOF 00:10:17.106 )") 00:10:17.106 06:35:12 -- nvmf/common.sh@542 -- # cat 00:10:17.106 06:35:12 -- nvmf/common.sh@544 -- # jq . 00:10:17.106 06:35:12 -- nvmf/common.sh@545 -- # IFS=, 00:10:17.106 06:35:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:17.106 "params": { 00:10:17.106 "name": "Nvme1", 00:10:17.106 "trtype": "tcp", 00:10:17.106 "traddr": "10.0.0.2", 00:10:17.106 "adrfam": "ipv4", 00:10:17.106 "trsvcid": "4420", 00:10:17.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.106 "hdgst": false, 00:10:17.106 "ddgst": false 00:10:17.106 }, 00:10:17.107 "method": "bdev_nvme_attach_controller" 00:10:17.107 }' 00:10:17.107 [2024-12-05 06:35:12.379972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:17.107 [2024-12-05 06:35:12.380056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74368 ] 00:10:17.107 [2024-12-05 06:35:12.522027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.107 [2024-12-05 06:35:12.561912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.366 Running I/O for 10 seconds... 00:10:27.375 00:10:27.375 Latency(us) 00:10:27.375 [2024-12-05T06:35:22.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.375 [2024-12-05T06:35:22.841Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:27.375 Verification LBA range: start 0x0 length 0x1000 00:10:27.375 Nvme1n1 : 10.01 9905.49 77.39 0.00 0.00 12889.24 700.04 19422.49 00:10:27.375 [2024-12-05T06:35:22.841Z] =================================================================================================================== 00:10:27.375 [2024-12-05T06:35:22.841Z] Total : 9905.49 77.39 0.00 0.00 12889.24 700.04 19422.49 00:10:27.635 06:35:22 -- target/zcopy.sh@39 -- # perfpid=74480 00:10:27.635 06:35:22 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:27.635 06:35:22 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:27.635 06:35:22 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:27.635 06:35:22 -- common/autotest_common.sh@10 -- # set +x 00:10:27.635 06:35:22 -- nvmf/common.sh@520 -- # config=() 00:10:27.635 06:35:22 -- nvmf/common.sh@520 -- # local subsystem config 00:10:27.635 06:35:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:27.635 06:35:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:27.635 { 00:10:27.635 "params": { 00:10:27.635 "name": "Nvme$subsystem", 00:10:27.635 "trtype": "$TEST_TRANSPORT", 00:10:27.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:27.635 "adrfam": "ipv4", 00:10:27.635 "trsvcid": "$NVMF_PORT", 00:10:27.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:27.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:27.635 "hdgst": ${hdgst:-false}, 00:10:27.635 "ddgst": ${ddgst:-false} 
00:10:27.635 }, 00:10:27.635 "method": "bdev_nvme_attach_controller" 00:10:27.635 } 00:10:27.635 EOF 00:10:27.635 )") 00:10:27.635 [2024-12-05 06:35:22.856022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.856217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 06:35:22 -- nvmf/common.sh@542 -- # cat 00:10:27.635 06:35:22 -- nvmf/common.sh@544 -- # jq . 00:10:27.635 06:35:22 -- nvmf/common.sh@545 -- # IFS=, 00:10:27.635 06:35:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:27.635 "params": { 00:10:27.635 "name": "Nvme1", 00:10:27.635 "trtype": "tcp", 00:10:27.635 "traddr": "10.0.0.2", 00:10:27.635 "adrfam": "ipv4", 00:10:27.635 "trsvcid": "4420", 00:10:27.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:27.635 "hdgst": false, 00:10:27.635 "ddgst": false 00:10:27.635 }, 00:10:27.635 "method": "bdev_nvme_attach_controller" 00:10:27.635 }' 00:10:27.635 [2024-12-05 06:35:22.867998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.868187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.880016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.880179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.892000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.892156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.898469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:27.635 [2024-12-05 06:35:22.898728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74480 ] 00:10:27.635 [2024-12-05 06:35:22.904002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.904144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.916037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.916191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.928007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.928143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.940023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.940178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.952009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.952155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.964011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.964156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.976013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.976163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:22.988017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:22.988162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.000023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.000186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.012023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.012168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.024028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.024176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.035665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.635 [2024-12-05 06:35:23.036036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.036162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.048097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.048404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
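The JSON that gen_nvmf_target_json prints above is only the per-controller fragment; it is wrapped in SPDK's standard JSON-config envelope before being handed to bdevperf over the --json file descriptor. A minimal hand-rolled equivalent for the same controller (the envelope is written from the usual subsystems/bdev/config layout and is an assumption here; any extra options the helper normally injects are omitted):

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192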
00:10:27.635 [2024-12-05 06:35:23.060080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.060402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.070471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.635 [2024-12-05 06:35:23.072056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.072087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.084062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.084093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.635 [2024-12-05 06:35:23.096076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.635 [2024-12-05 06:35:23.096114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.108098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.108150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.120106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.120144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.132084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.132117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.144084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.144117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.156091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.156123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.168098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.168129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.180107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.180137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.192136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.192171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 Running I/O for 5 seconds... 
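Everything from here to the end of the 5-second run is pairs of the same two *ERROR* lines, and that is the test behaving as designed: with zero-copy enabled on the transport (nvmf_create_transport -t tcp -o -c 0 --zcopy earlier), the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while bdevperf drives I/O. Each attempt pauses the subsystem, fails in nvmf_rpc_ns_paused because NSID 1 is still attached, and resumes it, so the zcopy data path is repeatedly exercised across pause/resume. A hedged sketch of an equivalent stress loop against a running target (assumes SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock):

# Every call is expected to fail with "Requested NSID 1 already in use";
# the value is in the pause/resume cycle it forces under active I/O.
while true; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done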
00:10:27.895 [2024-12-05 06:35:23.204139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.204170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.222716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.222750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.237313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.237525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.256084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.256119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.270330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.270373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.286408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.286442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.302922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.302959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.320034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.320068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.335915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.335951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.895 [2024-12-05 06:35:23.354099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.895 [2024-12-05 06:35:23.354142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.368949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.369132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.378768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.378831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.392591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.392627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.408123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.408295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.426249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 
[2024-12-05 06:35:23.426424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.442519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.442670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.458890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.459044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.477536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.477687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.493015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.493165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.510867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.511024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.526135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.526283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.543383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.543549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.155 [2024-12-05 06:35:23.560547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.155 [2024-12-05 06:35:23.560697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.156 [2024-12-05 06:35:23.577156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.156 [2024-12-05 06:35:23.577345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.156 [2024-12-05 06:35:23.594490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.156 [2024-12-05 06:35:23.594658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.156 [2024-12-05 06:35:23.609544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.156 [2024-12-05 06:35:23.609743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.625911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.626103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.641870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.642037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.660239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.660426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.674480] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.674651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.690101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.690263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.708087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.708253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.723623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.723818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.732872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.733038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.748799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.748833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.766299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.766382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.783447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.783478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.799441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.799473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.816203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.816236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.832131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.832166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.848223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.848257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.415 [2024-12-05 06:35:23.866230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.415 [2024-12-05 06:35:23.866264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.881603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.881816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.897872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.897908] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.907503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.907538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.923893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.923929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.942281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.942343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.955629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.955662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.972365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.972397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:23.988909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:23.988943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.005672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.005720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.022405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.022437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.039520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.039552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.055491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.055523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.074846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.075016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.090055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.090244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.106579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.106615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.675 [2024-12-05 06:35:24.123989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.675 [2024-12-05 06:35:24.124022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.141558] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.141605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.157236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.157433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.174864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.174900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.191231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.191267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.207472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.207505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.224062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.224097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.241232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.241268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.257133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.257169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.275350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.275417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.290256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.290290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.307802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.307837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.324350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.324415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.340528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.340577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.357168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.357202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.373669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.373702] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.935 [2024-12-05 06:35:24.389781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.935 [2024-12-05 06:35:24.389814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.407040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.407198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.422632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.422857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.440819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.441006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.456772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.456975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.475084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.475309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.489678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.489869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.499422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.499618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.514404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.514588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.524500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.524717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.539076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.539474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.554953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.555351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.572893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.573203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.589135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.589480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.195 [2024-12-05 06:35:24.606457] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.195 [2024-12-05 06:35:24.606780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[duplicate log output elided: the same two-line pair - subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace - repeats roughly every 10-20 ms from 06:35:24.621686 through 06:35:28.210282 (elapsed 00:10:29.195 to 00:10:32.830) while the zcopy test keeps issuing nvmf_subsystem_add_ns for NSID 1, which the subsystem already holds]
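For context, the error storm above comes from repeated nvmf_subsystem_add_ns RPCs against a namespace ID that is already taken. A minimal way to trigger the same pair of messages by hand - a sketch only, assuming the stock scripts/rpc.py helper, a running target with subsystem nqn.2016-06.io.spdk:cnode1, and two hypothetical malloc bdevs - is to claim NSID 1 twice:

    scripts/rpc.py bdev_malloc_create -b malloc0 64 512    # hypothetical 64 MiB backing bdevs
    scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # claims NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1    # fails: NSID 1 already in use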
00:10:32.830 Latency(us)
00:10:32.830 [2024-12-05T06:35:28.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.830 [2024-12-05T06:35:28.296Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:32.830 Nvme1n1 : 5.01 12356.96 96.54 0.00 0.00 10347.09 4140.68 25141.99
00:10:32.830 [2024-12-05T06:35:28.296Z] ===================================================================================================================
00:10:32.830 [2024-12-05T06:35:28.296Z] Total : 12356.96 96.54 0.00 0.00 10347.09 4140.68 25141.99
[the same NSID 1 error pair recurs from 06:35:28.222301 through 06:35:28.354405 while the remaining add-namespace RPCs drain during shutdown; those duplicates are elided as well]
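The teardown trace that follows reaps the background verify process and then rebuilds namespace 1 behind a delay bdev, so the abort run that comes next has slow I/O outstanding to cancel. As a standalone sketch of the same sequence (an illustration only, assuming the stock scripts/rpc.py helper and an existing malloc0 bdev; the flags mirror the rpc_cmd trace below):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # delay0 wraps malloc0 and injects 1,000,000 us (~1 s) average and p99 read/write latency
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1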
00:10:33.089 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74480) - No such process 00:10:33.089 06:35:28 -- target/zcopy.sh@49 -- # wait 74480 00:10:33.089 06:35:28 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.089 06:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.089 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:10:33.089 06:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.089 06:35:28 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.089 06:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.089 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:10:33.089 delay0 00:10:33.089 06:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.089 06:35:28 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:33.089 06:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.089 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:10:33.089 06:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.089 06:35:28 -- target/zcopy.sh@56 -- #
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:33.348 [2024-12-05 06:35:28.557787] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:39.920 Initializing NVMe Controllers 00:10:39.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:39.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:39.920 Initialization complete. Launching workers. 00:10:39.920 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 384 00:10:39.920 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 671, failed to submit 33 00:10:39.920 success 556, unsuccess 115, failed 0 00:10:39.920 06:35:34 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:39.920 06:35:34 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:39.920 06:35:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:39.920 06:35:34 -- nvmf/common.sh@116 -- # sync 00:10:39.920 06:35:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:39.920 06:35:34 -- nvmf/common.sh@119 -- # set +e 00:10:39.920 06:35:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:39.920 06:35:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:39.920 rmmod nvme_tcp 00:10:39.920 rmmod nvme_fabrics 00:10:39.920 rmmod nvme_keyring 00:10:39.920 06:35:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:39.920 06:35:34 -- nvmf/common.sh@123 -- # set -e 00:10:39.920 06:35:34 -- nvmf/common.sh@124 -- # return 0 00:10:39.920 06:35:34 -- nvmf/common.sh@477 -- # '[' -n 74343 ']' 00:10:39.920 06:35:34 -- nvmf/common.sh@478 -- # killprocess 74343 00:10:39.920 06:35:34 -- common/autotest_common.sh@936 -- # '[' -z 74343 ']' 00:10:39.920 06:35:34 -- common/autotest_common.sh@940 -- # kill -0 74343 00:10:39.920 06:35:34 -- common/autotest_common.sh@941 -- # uname 00:10:39.920 06:35:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.920 06:35:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74343 00:10:39.920 06:35:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:39.920 killing process with pid 74343 00:10:39.920 06:35:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:39.920 06:35:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74343' 00:10:39.920 06:35:34 -- common/autotest_common.sh@955 -- # kill 74343 00:10:39.920 06:35:34 -- common/autotest_common.sh@960 -- # wait 74343 00:10:39.920 06:35:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:39.920 06:35:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:39.920 06:35:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:39.920 06:35:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.920 06:35:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:39.920 06:35:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.920 06:35:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.920 06:35:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.920 06:35:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:39.920 ************************************ 00:10:39.920 END TEST nvmf_zcopy 00:10:39.920 ************************************ 00:10:39.920 00:10:39.920 real 0m23.604s 
00:10:39.920 user 0m39.108s 00:10:39.920 sys 0m6.400s 00:10:39.920 06:35:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.920 06:35:34 -- common/autotest_common.sh@10 -- # set +x 00:10:39.920 06:35:35 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:39.920 06:35:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.920 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:10:39.920 ************************************ 00:10:39.920 START TEST nvmf_nmic 00:10:39.920 ************************************ 00:10:39.920 06:35:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:39.920 * Looking for test storage... 00:10:39.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:39.920 06:35:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:39.920 06:35:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:39.920 06:35:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:39.920 06:35:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:39.920 06:35:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:39.920 06:35:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:39.920 06:35:35 -- scripts/common.sh@335 -- # IFS=.-: 00:10:39.920 06:35:35 -- scripts/common.sh@335 -- # read -ra ver1 00:10:39.920 06:35:35 -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.920 06:35:35 -- scripts/common.sh@336 -- # read -ra ver2 00:10:39.920 06:35:35 -- scripts/common.sh@337 -- # local 'op=<' 00:10:39.920 06:35:35 -- scripts/common.sh@339 -- # ver1_l=2 00:10:39.920 06:35:35 -- scripts/common.sh@340 -- # ver2_l=1 00:10:39.920 06:35:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:39.920 06:35:35 -- scripts/common.sh@343 -- # case "$op" in 00:10:39.920 06:35:35 -- scripts/common.sh@344 -- # : 1 00:10:39.920 06:35:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:39.920 06:35:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.920 06:35:35 -- scripts/common.sh@364 -- # decimal 1 00:10:39.920 06:35:35 -- scripts/common.sh@352 -- # local d=1 00:10:39.920 06:35:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.920 06:35:35 -- scripts/common.sh@354 -- # echo 1 00:10:39.920 06:35:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:39.920 06:35:35 -- scripts/common.sh@365 -- # decimal 2 00:10:39.920 06:35:35 -- scripts/common.sh@352 -- # local d=2 00:10:39.920 06:35:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.920 06:35:35 -- scripts/common.sh@354 -- # echo 2 00:10:39.920 06:35:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:39.920 06:35:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:39.920 06:35:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:39.920 06:35:35 -- scripts/common.sh@367 -- # return 0 00:10:39.920 06:35:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.920 --rc genhtml_branch_coverage=1 00:10:39.920 --rc genhtml_function_coverage=1 00:10:39.920 --rc genhtml_legend=1 00:10:39.920 --rc geninfo_all_blocks=1 00:10:39.920 --rc geninfo_unexecuted_blocks=1 00:10:39.920 00:10:39.920 ' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.920 --rc genhtml_branch_coverage=1 00:10:39.920 --rc genhtml_function_coverage=1 00:10:39.920 --rc genhtml_legend=1 00:10:39.920 --rc geninfo_all_blocks=1 00:10:39.920 --rc geninfo_unexecuted_blocks=1 00:10:39.920 00:10:39.920 ' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.920 --rc genhtml_branch_coverage=1 00:10:39.920 --rc genhtml_function_coverage=1 00:10:39.920 --rc genhtml_legend=1 00:10:39.920 --rc geninfo_all_blocks=1 00:10:39.920 --rc geninfo_unexecuted_blocks=1 00:10:39.920 00:10:39.920 ' 00:10:39.920 06:35:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.920 --rc genhtml_branch_coverage=1 00:10:39.920 --rc genhtml_function_coverage=1 00:10:39.920 --rc genhtml_legend=1 00:10:39.920 --rc geninfo_all_blocks=1 00:10:39.920 --rc geninfo_unexecuted_blocks=1 00:10:39.920 00:10:39.920 ' 00:10:39.920 06:35:35 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:39.920 06:35:35 -- nvmf/common.sh@7 -- # uname -s 00:10:39.920 06:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.920 06:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.920 06:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.920 06:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.920 06:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.920 06:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.920 06:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.920 06:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.920 06:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.920 06:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.920 06:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:39.920 
06:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:39.920 06:35:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.920 06:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.920 06:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:39.920 06:35:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.920 06:35:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.920 06:35:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.920 06:35:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.921 06:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.921 06:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.921 06:35:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.921 06:35:35 -- paths/export.sh@5 -- # export PATH 00:10:39.921 06:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.921 06:35:35 -- nvmf/common.sh@46 -- # : 0 00:10:39.921 06:35:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:39.921 06:35:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:39.921 06:35:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:39.921 06:35:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.921 06:35:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.921 06:35:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
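Before any network setup, nvmf/common.sh mints a per-run host identity: nvme gen-hostnqn produces the UUID-based NQN seen above, and the same UUID doubles as the host ID so the nvme connect calls later in this test can pass both flags. A minimal sketch of that derivation (the variable names are the harness's own; the parameter expansion is an assumption, since the exact common.sh line is not visible in this excerpt):

    # Sketch: derive a host NQN and matching host ID, as nvmf/common.sh appears to do
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: strip up to the last ':' to recover the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")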
00:10:39.921 06:35:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:39.921 06:35:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:39.921 06:35:35 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.921 06:35:35 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.921 06:35:35 -- target/nmic.sh@14 -- # nvmftestinit 00:10:39.921 06:35:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:39.921 06:35:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.921 06:35:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:39.921 06:35:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:39.921 06:35:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:39.921 06:35:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.921 06:35:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.921 06:35:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.921 06:35:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:39.921 06:35:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:39.921 06:35:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:39.921 06:35:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:39.921 06:35:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:39.921 06:35:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:39.921 06:35:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.921 06:35:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.921 06:35:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:39.921 06:35:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:39.921 06:35:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:39.921 06:35:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:39.921 06:35:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:39.921 06:35:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.921 06:35:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:39.921 06:35:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:39.921 06:35:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:39.921 06:35:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:39.921 06:35:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:39.921 06:35:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:39.921 Cannot find device "nvmf_tgt_br" 00:10:39.921 06:35:35 -- nvmf/common.sh@154 -- # true 00:10:39.921 06:35:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.921 Cannot find device "nvmf_tgt_br2" 00:10:39.921 06:35:35 -- nvmf/common.sh@155 -- # true 00:10:39.921 06:35:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:39.921 06:35:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:39.921 Cannot find device "nvmf_tgt_br" 00:10:39.921 06:35:35 -- nvmf/common.sh@157 -- # true 00:10:39.921 06:35:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:39.921 Cannot find device "nvmf_tgt_br2" 00:10:39.921 06:35:35 -- nvmf/common.sh@158 -- # true 00:10:39.921 06:35:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:39.921 06:35:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:40.179 06:35:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.179 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:40.179 06:35:35 -- nvmf/common.sh@161 -- # true 00:10:40.179 06:35:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.179 06:35:35 -- nvmf/common.sh@162 -- # true 00:10:40.179 06:35:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.179 06:35:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.179 06:35:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.179 06:35:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.179 06:35:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.179 06:35:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.179 06:35:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.179 06:35:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.179 06:35:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.179 06:35:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:40.179 06:35:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:40.179 06:35:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:40.179 06:35:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:40.179 06:35:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.179 06:35:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.179 06:35:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.179 06:35:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:40.179 06:35:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:40.179 06:35:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.179 06:35:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.179 06:35:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.180 06:35:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.180 06:35:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.180 06:35:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:40.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:10:40.180 00:10:40.180 --- 10.0.0.2 ping statistics --- 00:10:40.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.180 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:40.180 06:35:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:40.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:40.180 00:10:40.180 --- 10.0.0.3 ping statistics --- 00:10:40.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.180 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:40.180 06:35:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:40.180 00:10:40.180 --- 10.0.0.1 ping statistics --- 00:10:40.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.180 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:40.180 06:35:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.180 06:35:35 -- nvmf/common.sh@421 -- # return 0 00:10:40.180 06:35:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:40.180 06:35:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.180 06:35:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:40.180 06:35:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:40.180 06:35:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.180 06:35:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:40.180 06:35:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:40.180 06:35:35 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:40.180 06:35:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:40.180 06:35:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.180 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:10:40.180 06:35:35 -- nvmf/common.sh@469 -- # nvmfpid=74812 00:10:40.180 06:35:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.180 06:35:35 -- nvmf/common.sh@470 -- # waitforlisten 74812 00:10:40.180 06:35:35 -- common/autotest_common.sh@829 -- # '[' -z 74812 ']' 00:10:40.180 06:35:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.180 06:35:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.180 06:35:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.180 06:35:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.180 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:10:40.438 [2024-12-05 06:35:35.675034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:40.438 [2024-12-05 06:35:35.675140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.438 [2024-12-05 06:35:35.816983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.438 [2024-12-05 06:35:35.858637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:40.438 [2024-12-05 06:35:35.858823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.438 [2024-12-05 06:35:35.858840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.438 [2024-12-05 06:35:35.858852] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
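With the target now up inside the nvmf_tgt_ns_spdk namespace (pid 74812, reactors on cores 0-3), the nmic test drives it over JSON-RPC. Condensed from the rpc_cmd traces that follow — rpc_cmd wraps scripts/rpc.py in this harness — with the addresses and names of this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -o/-u as set by NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Case 1 then tries to add the same Malloc0 to a second subsystem, cnode2, and expects the -32602 "Invalid parameters" error shown below; case 2 adds a second listener on port 4421 and connects through both paths.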
00:10:40.438 [2024-12-05 06:35:35.858933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.438 [2024-12-05 06:35:35.859074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.438 [2024-12-05 06:35:35.859667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.438 [2024-12-05 06:35:35.859719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.372 06:35:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.372 06:35:36 -- common/autotest_common.sh@862 -- # return 0 00:10:41.372 06:35:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:41.372 06:35:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.372 06:35:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.372 06:35:36 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.372 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.372 [2024-12-05 06:35:36.774122] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.372 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.372 06:35:36 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.372 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.372 Malloc0 00:10:41.372 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.372 06:35:36 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.372 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.372 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.372 06:35:36 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.372 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.372 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.372 06:35:36 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.372 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.372 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.630 [2024-12-05 06:35:36.839723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.630 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.630 test case1: single bdev can't be used in multiple subsystems 00:10:41.630 06:35:36 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:41.630 06:35:36 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:41.630 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.630 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.630 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.630 06:35:36 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:41.630 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:41.630 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.630 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.630 06:35:36 -- target/nmic.sh@28 -- # nmic_status=0 00:10:41.630 06:35:36 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:41.630 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.630 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.630 [2024-12-05 06:35:36.863521] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:41.630 [2024-12-05 06:35:36.863574] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:41.630 [2024-12-05 06:35:36.863602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.630 request: 00:10:41.630 { 00:10:41.630 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:41.630 "namespace": { 00:10:41.630 "bdev_name": "Malloc0" 00:10:41.630 }, 00:10:41.630 "method": "nvmf_subsystem_add_ns", 00:10:41.630 "req_id": 1 00:10:41.630 } 00:10:41.630 Got JSON-RPC error response 00:10:41.630 response: 00:10:41.630 { 00:10:41.630 "code": -32602, 00:10:41.630 "message": "Invalid parameters" 00:10:41.630 } 00:10:41.630 06:35:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:41.630 06:35:36 -- target/nmic.sh@29 -- # nmic_status=1 00:10:41.630 06:35:36 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:41.630 Adding namespace failed - expected result. 00:10:41.630 06:35:36 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:41.630 test case2: host connect to nvmf target in multiple paths 00:10:41.630 06:35:36 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:41.630 06:35:36 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:41.630 06:35:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.630 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:41.630 [2024-12-05 06:35:36.875694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:41.630 06:35:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.630 06:35:36 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.630 06:35:37 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:41.889 06:35:37 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.889 06:35:37 -- common/autotest_common.sh@1187 -- # local i=0 00:10:41.889 06:35:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.889 06:35:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:41.889 06:35:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:43.790 06:35:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:43.790 06:35:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:43.790 06:35:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.790 06:35:39 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:10:43.790 06:35:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.790 06:35:39 -- common/autotest_common.sh@1197 -- # return 0 00:10:43.790 06:35:39 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.790 [global] 00:10:43.790 thread=1 00:10:43.790 invalidate=1 00:10:43.790 rw=write 00:10:43.790 time_based=1 00:10:43.790 runtime=1 00:10:43.790 ioengine=libaio 00:10:43.790 direct=1 00:10:43.790 bs=4096 00:10:43.790 iodepth=1 00:10:43.790 norandommap=0 00:10:43.790 numjobs=1 00:10:43.790 00:10:43.790 verify_dump=1 00:10:43.790 verify_backlog=512 00:10:43.790 verify_state_save=0 00:10:43.790 do_verify=1 00:10:43.790 verify=crc32c-intel 00:10:43.790 [job0] 00:10:43.790 filename=/dev/nvme0n1 00:10:43.790 Could not set queue depth (nvme0n1) 00:10:44.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.049 fio-3.35 00:10:44.049 Starting 1 thread 00:10:44.985 00:10:44.985 job0: (groupid=0, jobs=1): err= 0: pid=74898: Thu Dec 5 06:35:40 2024 00:10:44.985 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:44.985 slat (usec): min=10, max=120, avg=12.87, stdev= 6.19 00:10:44.985 clat (usec): min=128, max=626, avg=177.77, stdev=24.97 00:10:44.985 lat (usec): min=138, max=637, avg=190.63, stdev=26.03 00:10:44.985 clat percentiles (usec): 00:10:44.985 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:10:44.985 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:44.985 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 223], 00:10:44.985 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 277], 00:10:44.985 | 99.99th=[ 627] 00:10:44.985 write: IOPS=3117, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:10:44.985 slat (usec): min=12, max=127, avg=19.90, stdev= 6.35 00:10:44.985 clat (usec): min=66, max=750, avg=110.08, stdev=21.99 00:10:44.985 lat (usec): min=98, max=771, avg=129.98, stdev=23.69 00:10:44.985 clat percentiles (usec): 00:10:44.985 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 95], 00:10:44.985 | 30.00th=[ 98], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 110], 00:10:44.985 | 70.00th=[ 117], 80.00th=[ 125], 90.00th=[ 137], 95.00th=[ 147], 00:10:44.985 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 206], 99.95th=[ 338], 00:10:44.985 | 99.99th=[ 750] 00:10:44.985 bw ( KiB/s): min=12288, max=12288, per=98.53%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.985 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.985 lat (usec) : 100=18.05%, 250=81.53%, 500=0.39%, 750=0.02%, 1000=0.02% 00:10:44.985 cpu : usr=2.80%, sys=7.10%, ctx=6205, majf=0, minf=5 00:10:44.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.985 issued rwts: total=3072,3121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.985 00:10:44.985 Run status group 0 (all jobs): 00:10:44.985 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:44.985 WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:10:44.985 00:10:44.985 Disk stats (read/write): 
00:10:44.985 nvme0n1: ios=2618/3072, merge=0/0, ticks=503/391, in_queue=894, util=91.48% 00:10:44.985 06:35:40 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.244 06:35:40 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.244 06:35:40 -- common/autotest_common.sh@1208 -- # local i=0 00:10:45.244 06:35:40 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:45.244 06:35:40 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.244 06:35:40 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:45.244 06:35:40 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.244 06:35:40 -- common/autotest_common.sh@1220 -- # return 0 00:10:45.244 06:35:40 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.244 06:35:40 -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.244 06:35:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:45.244 06:35:40 -- nvmf/common.sh@116 -- # sync 00:10:45.244 06:35:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:45.244 06:35:40 -- nvmf/common.sh@119 -- # set +e 00:10:45.244 06:35:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:45.244 06:35:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:45.244 rmmod nvme_tcp 00:10:45.244 rmmod nvme_fabrics 00:10:45.244 rmmod nvme_keyring 00:10:45.244 06:35:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:45.244 06:35:40 -- nvmf/common.sh@123 -- # set -e 00:10:45.244 06:35:40 -- nvmf/common.sh@124 -- # return 0 00:10:45.244 06:35:40 -- nvmf/common.sh@477 -- # '[' -n 74812 ']' 00:10:45.244 06:35:40 -- nvmf/common.sh@478 -- # killprocess 74812 00:10:45.244 06:35:40 -- common/autotest_common.sh@936 -- # '[' -z 74812 ']' 00:10:45.244 06:35:40 -- common/autotest_common.sh@940 -- # kill -0 74812 00:10:45.244 06:35:40 -- common/autotest_common.sh@941 -- # uname 00:10:45.244 06:35:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.244 06:35:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74812 00:10:45.244 06:35:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:45.244 06:35:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:45.244 killing process with pid 74812 00:10:45.244 06:35:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74812' 00:10:45.244 06:35:40 -- common/autotest_common.sh@955 -- # kill 74812 00:10:45.244 06:35:40 -- common/autotest_common.sh@960 -- # wait 74812 00:10:45.503 06:35:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:45.503 06:35:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:45.503 06:35:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:45.503 06:35:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.503 06:35:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:45.503 06:35:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.503 06:35:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.503 06:35:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.503 06:35:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:45.503 00:10:45.503 real 0m5.773s 00:10:45.503 user 0m18.632s 00:10:45.503 sys 0m2.212s 00:10:45.503 06:35:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:45.503 06:35:40 -- common/autotest_common.sh@10 -- # set +x 
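Teardown mirrors bring-up: the initiator disconnects from cnode1, the kernel modules are unloaded with a retry loop (nvme-tcp can stay busy briefly while queues drain), the target pid is killed, and the namespace plumbing is flushed. A sketch of the module-removal loop, reconstructed from the common.sh@119-124 traces above; the break condition and any delay between attempts are assumptions, since only the successful first pass is visible here:

    set +e                                # tolerate 'module in use' failures
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # also drops nvme_fabrics/nvme_keyring as dependents
        sleep 1                           # assumption: retry delay not shown in this excerpt
    done
    modprobe -v -r nvme-fabrics
    set -e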
00:10:45.503 ************************************ 00:10:45.503 END TEST nvmf_nmic 00:10:45.503 ************************************ 00:10:45.503 06:35:40 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.503 06:35:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:45.503 06:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.503 06:35:40 -- common/autotest_common.sh@10 -- # set +x 00:10:45.503 ************************************ 00:10:45.503 START TEST nvmf_fio_target 00:10:45.503 ************************************ 00:10:45.503 06:35:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:45.503 * Looking for test storage... 00:10:45.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.503 06:35:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:45.503 06:35:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:45.503 06:35:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:45.762 06:35:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:45.762 06:35:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:45.762 06:35:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:45.762 06:35:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:45.762 06:35:41 -- scripts/common.sh@335 -- # IFS=.-: 00:10:45.762 06:35:41 -- scripts/common.sh@335 -- # read -ra ver1 00:10:45.762 06:35:41 -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.762 06:35:41 -- scripts/common.sh@336 -- # read -ra ver2 00:10:45.762 06:35:41 -- scripts/common.sh@337 -- # local 'op=<' 00:10:45.762 06:35:41 -- scripts/common.sh@339 -- # ver1_l=2 00:10:45.762 06:35:41 -- scripts/common.sh@340 -- # ver2_l=1 00:10:45.762 06:35:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:45.762 06:35:41 -- scripts/common.sh@343 -- # case "$op" in 00:10:45.762 06:35:41 -- scripts/common.sh@344 -- # : 1 00:10:45.762 06:35:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:45.762 06:35:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.762 06:35:41 -- scripts/common.sh@364 -- # decimal 1 00:10:45.762 06:35:41 -- scripts/common.sh@352 -- # local d=1 00:10:45.762 06:35:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.762 06:35:41 -- scripts/common.sh@354 -- # echo 1 00:10:45.762 06:35:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:45.762 06:35:41 -- scripts/common.sh@365 -- # decimal 2 00:10:45.762 06:35:41 -- scripts/common.sh@352 -- # local d=2 00:10:45.762 06:35:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.762 06:35:41 -- scripts/common.sh@354 -- # echo 2 00:10:45.762 06:35:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:45.762 06:35:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:45.762 06:35:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:45.762 06:35:41 -- scripts/common.sh@367 -- # return 0 00:10:45.762 06:35:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.763 06:35:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:45.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.763 --rc genhtml_branch_coverage=1 00:10:45.763 --rc genhtml_function_coverage=1 00:10:45.763 --rc genhtml_legend=1 00:10:45.763 --rc geninfo_all_blocks=1 00:10:45.763 --rc geninfo_unexecuted_blocks=1 00:10:45.763 00:10:45.763 ' 00:10:45.763 06:35:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:45.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.763 --rc genhtml_branch_coverage=1 00:10:45.763 --rc genhtml_function_coverage=1 00:10:45.763 --rc genhtml_legend=1 00:10:45.763 --rc geninfo_all_blocks=1 00:10:45.763 --rc geninfo_unexecuted_blocks=1 00:10:45.763 00:10:45.763 ' 00:10:45.763 06:35:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:45.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.763 --rc genhtml_branch_coverage=1 00:10:45.763 --rc genhtml_function_coverage=1 00:10:45.763 --rc genhtml_legend=1 00:10:45.763 --rc geninfo_all_blocks=1 00:10:45.763 --rc geninfo_unexecuted_blocks=1 00:10:45.763 00:10:45.763 ' 00:10:45.763 06:35:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:45.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.763 --rc genhtml_branch_coverage=1 00:10:45.763 --rc genhtml_function_coverage=1 00:10:45.763 --rc genhtml_legend=1 00:10:45.763 --rc geninfo_all_blocks=1 00:10:45.763 --rc geninfo_unexecuted_blocks=1 00:10:45.763 00:10:45.763 ' 00:10:45.763 06:35:41 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.763 06:35:41 -- nvmf/common.sh@7 -- # uname -s 00:10:45.763 06:35:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.763 06:35:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.763 06:35:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.763 06:35:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.763 06:35:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.763 06:35:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.763 06:35:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.763 06:35:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.763 06:35:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.763 06:35:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:45.763 
06:35:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:10:45.763 06:35:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.763 06:35:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.763 06:35:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.763 06:35:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.763 06:35:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.763 06:35:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.763 06:35:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.763 06:35:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.763 06:35:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.763 06:35:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.763 06:35:41 -- paths/export.sh@5 -- # export PATH 00:10:45.763 06:35:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.763 06:35:41 -- nvmf/common.sh@46 -- # : 0 00:10:45.763 06:35:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:45.763 06:35:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:45.763 06:35:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:45.763 06:35:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.763 06:35:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.763 06:35:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
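nvmf_fio_target repeats the same veth bring-up below, but backs its subsystem with four bdevs so that fio sees four namespaces: two plain malloc disks, a RAID-0 across two more, and a concat across three. The bdev layering, condensed from the bdev_malloc_create and bdev_raid_create calls later in this test (bdev_malloc_create prints the auto-assigned names Malloc0..Malloc6):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done                     # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'          # striped RAID-0
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'  # linear concat
    # Malloc0, Malloc1, raid0 and concat0 each become a namespace of cnode1 -> /dev/nvme0n1..n4

-z 64 is the strip size in KiB.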
00:10:45.763 06:35:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:45.763 06:35:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:45.763 06:35:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.763 06:35:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.763 06:35:41 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.763 06:35:41 -- target/fio.sh@16 -- # nvmftestinit 00:10:45.763 06:35:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:45.763 06:35:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.763 06:35:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:45.763 06:35:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:45.763 06:35:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:45.763 06:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.763 06:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.763 06:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.763 06:35:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:45.763 06:35:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:45.763 06:35:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.763 06:35:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.763 06:35:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:45.763 06:35:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:45.763 06:35:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.763 06:35:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.763 06:35:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.763 06:35:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.763 06:35:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.763 06:35:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.763 06:35:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.763 06:35:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.763 06:35:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:45.763 06:35:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:45.763 Cannot find device "nvmf_tgt_br" 00:10:45.763 06:35:41 -- nvmf/common.sh@154 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.763 Cannot find device "nvmf_tgt_br2" 00:10:45.763 06:35:41 -- nvmf/common.sh@155 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:45.763 06:35:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:45.763 Cannot find device "nvmf_tgt_br" 00:10:45.763 06:35:41 -- nvmf/common.sh@157 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:45.763 Cannot find device "nvmf_tgt_br2" 00:10:45.763 06:35:41 -- nvmf/common.sh@158 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:45.763 06:35:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:45.763 06:35:41 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.763 06:35:41 -- nvmf/common.sh@161 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.763 06:35:41 -- nvmf/common.sh@162 -- # true 00:10:45.763 06:35:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.023 06:35:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.023 06:35:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.023 06:35:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.023 06:35:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.023 06:35:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.023 06:35:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.023 06:35:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.023 06:35:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.023 06:35:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:46.023 06:35:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:46.023 06:35:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:46.023 06:35:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:46.023 06:35:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.023 06:35:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.023 06:35:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.023 06:35:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:46.023 06:35:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:46.023 06:35:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.023 06:35:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.023 06:35:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.023 06:35:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.023 06:35:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.023 06:35:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:46.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:10:46.023 00:10:46.023 --- 10.0.0.2 ping statistics --- 00:10:46.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.023 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:46.023 06:35:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:46.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:46.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:46.023 00:10:46.023 --- 10.0.0.3 ping statistics --- 00:10:46.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.023 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:46.023 06:35:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:10:46.023 00:10:46.023 --- 10.0.0.1 ping statistics --- 00:10:46.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.023 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:46.023 06:35:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.023 06:35:41 -- nvmf/common.sh@421 -- # return 0 00:10:46.023 06:35:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:46.023 06:35:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.023 06:35:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:46.023 06:35:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:46.023 06:35:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.023 06:35:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:46.023 06:35:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:46.023 06:35:41 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:46.023 06:35:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:46.023 06:35:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:46.023 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 06:35:41 -- nvmf/common.sh@469 -- # nvmfpid=75088 00:10:46.023 06:35:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.023 06:35:41 -- nvmf/common.sh@470 -- # waitforlisten 75088 00:10:46.023 06:35:41 -- common/autotest_common.sh@829 -- # '[' -z 75088 ']' 00:10:46.023 06:35:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.023 06:35:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.023 06:35:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.023 06:35:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.023 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 [2024-12-05 06:35:41.484730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:46.023 [2024-12-05 06:35:41.484847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.282 [2024-12-05 06:35:41.622798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.282 [2024-12-05 06:35:41.656869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:46.282 [2024-12-05 06:35:41.657002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.282 [2024-12-05 06:35:41.657014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
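The connect step further below attaches all four namespaces through one controller, and the harness then polls until the expected device count appears. waitforserial, reconstructed from the autotest_common.sh@1187-1197 traces visible in both tests (the 2s sleep and the 15-iteration cap are in the trace; the argument handling and failure return are assumptions):

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1   # assumption: timeout branch not visible in this excerpt
    }

Here it is called as waitforserial SPDKISFASTANDAWESOME 4, so it waits for nvme0n1 through nvme0n4 before the fio jobs start.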
00:10:46.282 [2024-12-05 06:35:41.657022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.282 [2024-12-05 06:35:41.657182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.282 [2024-12-05 06:35:41.657367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.282 [2024-12-05 06:35:41.659462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.282 [2024-12-05 06:35:41.659474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.253 06:35:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.253 06:35:42 -- common/autotest_common.sh@862 -- # return 0 00:10:47.253 06:35:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:47.253 06:35:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:47.253 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:10:47.253 06:35:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.253 06:35:42 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:47.513 [2024-12-05 06:35:42.769318] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.513 06:35:42 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.773 06:35:43 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:47.773 06:35:43 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.033 06:35:43 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:48.033 06:35:43 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.292 06:35:43 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:48.292 06:35:43 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.551 06:35:43 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:48.551 06:35:43 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:48.810 06:35:44 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.069 06:35:44 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:49.069 06:35:44 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.328 06:35:44 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:49.328 06:35:44 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.587 06:35:44 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:49.587 06:35:44 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:49.852 06:35:45 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.113 06:35:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.113 06:35:45 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.371 06:35:45 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:50.371 06:35:45 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.631 06:35:45 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.890 [2024-12-05 06:35:46.146931] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.890 06:35:46 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:51.150 06:35:46 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:51.410 06:35:46 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.410 06:35:46 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:51.410 06:35:46 -- common/autotest_common.sh@1187 -- # local i=0 00:10:51.410 06:35:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.410 06:35:46 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:51.410 06:35:46 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:51.410 06:35:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:53.949 06:35:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:53.949 06:35:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:53.949 06:35:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.949 06:35:48 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:53.949 06:35:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.949 06:35:48 -- common/autotest_common.sh@1197 -- # return 0 00:10:53.949 06:35:48 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:53.949 [global] 00:10:53.949 thread=1 00:10:53.949 invalidate=1 00:10:53.949 rw=write 00:10:53.949 time_based=1 00:10:53.949 runtime=1 00:10:53.949 ioengine=libaio 00:10:53.949 direct=1 00:10:53.949 bs=4096 00:10:53.949 iodepth=1 00:10:53.949 norandommap=0 00:10:53.949 numjobs=1 00:10:53.949 00:10:53.949 verify_dump=1 00:10:53.949 verify_backlog=512 00:10:53.949 verify_state_save=0 00:10:53.949 do_verify=1 00:10:53.949 verify=crc32c-intel 00:10:53.949 [job0] 00:10:53.949 filename=/dev/nvme0n1 00:10:53.949 [job1] 00:10:53.949 filename=/dev/nvme0n2 00:10:53.949 [job2] 00:10:53.949 filename=/dev/nvme0n3 00:10:53.949 [job3] 00:10:53.949 filename=/dev/nvme0n4 00:10:53.949 Could not set queue depth (nvme0n1) 00:10:53.949 Could not set queue depth (nvme0n2) 00:10:53.949 Could not set queue depth (nvme0n3) 00:10:53.949 Could not set queue depth (nvme0n4) 00:10:53.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.949 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.949 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.949 fio-3.35 00:10:53.949 Starting 4 threads 00:10:54.889 00:10:54.889 job0: (groupid=0, jobs=1): err= 0: pid=75273: Thu Dec 5 06:35:50 2024 00:10:54.889 read: IOPS=2056, BW=8228KiB/s (8425kB/s)(8236KiB/1001msec) 
00:10:54.889 slat (nsec): min=7660, max=43246, avg=10928.67, stdev=3905.71 00:10:54.889 clat (usec): min=141, max=597, avg=229.64, stdev=30.61 00:10:54.889 lat (usec): min=155, max=606, avg=240.57, stdev=30.59 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:10:54.889 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:10:54.889 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:10:54.889 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 445], 99.95th=[ 586], 00:10:54.889 | 99.99th=[ 594] 00:10:54.889 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:54.889 slat (usec): min=12, max=113, avg=21.27, stdev= 6.77 00:10:54.889 clat (usec): min=94, max=336, avg=173.18, stdev=34.51 00:10:54.889 lat (usec): min=117, max=358, avg=194.45, stdev=33.58 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 105], 5.00th=[ 116], 10.00th=[ 123], 20.00th=[ 143], 00:10:54.889 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:10:54.889 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 231], 00:10:54.889 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 302], 99.95th=[ 310], 00:10:54.889 | 99.99th=[ 338] 00:10:54.889 bw ( KiB/s): min=10664, max=10664, per=29.32%, avg=10664.00, stdev= 0.00, samples=1 00:10:54.889 iops : min= 2666, max= 2666, avg=2666.00, stdev= 0.00, samples=1 00:10:54.889 lat (usec) : 100=0.19%, 250=90.37%, 500=9.40%, 750=0.04% 00:10:54.889 cpu : usr=1.20%, sys=6.80%, ctx=4621, majf=0, minf=9 00:10:54.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 issued rwts: total=2059,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.889 job1: (groupid=0, jobs=1): err= 0: pid=75274: Thu Dec 5 06:35:50 2024 00:10:54.889 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:54.889 slat (nsec): min=9800, max=61819, avg=14229.45, stdev=4612.63 00:10:54.889 clat (usec): min=157, max=589, avg=227.49, stdev=30.05 00:10:54.889 lat (usec): min=172, max=602, avg=241.72, stdev=30.62 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:54.889 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:10:54.889 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 277], 00:10:54.889 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 433], 99.95th=[ 441], 00:10:54.889 | 99.99th=[ 594] 00:10:54.889 write: IOPS=2443, BW=9774KiB/s (10.0MB/s)(9784KiB/1001msec); 0 zone resets 00:10:54.889 slat (nsec): min=10151, max=85021, avg=18672.78, stdev=7763.14 00:10:54.889 clat (usec): min=92, max=3805, avg=184.97, stdev=128.28 00:10:54.889 lat (usec): min=112, max=3886, avg=203.64, stdev=128.74 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 103], 5.00th=[ 116], 10.00th=[ 126], 20.00th=[ 151], 00:10:54.889 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 190], 00:10:54.889 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 223], 95.00th=[ 233], 00:10:54.889 | 99.00th=[ 262], 99.50th=[ 318], 99.90th=[ 2442], 99.95th=[ 3195], 00:10:54.889 | 99.99th=[ 3818] 00:10:54.889 bw ( KiB/s): min= 9779, max= 9779, per=26.89%, avg=9779.00, stdev= 0.00, samples=1 00:10:54.889 iops : min= 2444, max= 2444, 
avg=2444.00, stdev= 0.00, samples=1 00:10:54.889 lat (usec) : 100=0.11%, 250=90.68%, 500=8.97%, 750=0.07% 00:10:54.889 lat (msec) : 2=0.09%, 4=0.09% 00:10:54.889 cpu : usr=2.00%, sys=6.20%, ctx=4497, majf=0, minf=7 00:10:54.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 issued rwts: total=2048,2446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.889 job2: (groupid=0, jobs=1): err= 0: pid=75275: Thu Dec 5 06:35:50 2024 00:10:54.889 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:10:54.889 slat (nsec): min=11040, max=57398, avg=14689.39, stdev=4785.53 00:10:54.889 clat (usec): min=199, max=646, avg=255.64, stdev=27.89 00:10:54.889 lat (usec): min=211, max=660, avg=270.33, stdev=28.65 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:10:54.889 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:10:54.889 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:10:54.889 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 644], 00:10:54.889 | 99.99th=[ 644] 00:10:54.889 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:54.889 slat (nsec): min=12891, max=85594, avg=20624.35, stdev=5764.24 00:10:54.889 clat (usec): min=152, max=402, avg=212.19, stdev=30.07 00:10:54.889 lat (usec): min=170, max=423, avg=232.81, stdev=31.04 00:10:54.889 clat percentiles (usec): 00:10:54.889 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:10:54.889 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:10:54.889 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:10:54.889 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 392], 00:10:54.889 | 99.99th=[ 404] 00:10:54.889 bw ( KiB/s): min= 8192, max= 8192, per=22.52%, avg=8192.00, stdev= 0.00, samples=1 00:10:54.889 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:54.889 lat (usec) : 250=67.93%, 500=32.05%, 750=0.03% 00:10:54.889 cpu : usr=1.70%, sys=6.00%, ctx=3960, majf=0, minf=15 00:10:54.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.889 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.890 job3: (groupid=0, jobs=1): err= 0: pid=75276: Thu Dec 5 06:35:50 2024 00:10:54.890 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:10:54.890 slat (nsec): min=7572, max=39379, avg=9950.59, stdev=3198.85 00:10:54.890 clat (usec): min=171, max=720, avg=260.93, stdev=30.45 00:10:54.890 lat (usec): min=200, max=729, avg=270.88, stdev=30.77 00:10:54.890 clat percentiles (usec): 00:10:54.890 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 237], 00:10:54.890 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:54.890 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 310], 00:10:54.890 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 676], 99.95th=[ 717], 00:10:54.890 | 99.99th=[ 717] 00:10:54.890 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:54.890 slat (nsec): min=9896, max=75197, avg=18302.39, stdev=6862.16 00:10:54.890 clat (usec): min=157, max=417, avg=214.76, stdev=29.84 00:10:54.890 lat (usec): min=170, max=431, avg=233.06, stdev=31.00 00:10:54.890 clat percentiles (usec): 00:10:54.890 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:10:54.890 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:10:54.890 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 273], 00:10:54.890 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 408], 00:10:54.890 | 99.99th=[ 416] 00:10:54.890 bw ( KiB/s): min= 8192, max= 8192, per=22.52%, avg=8192.00, stdev= 0.00, samples=1 00:10:54.890 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:54.890 lat (usec) : 250=64.49%, 500=35.45%, 750=0.05% 00:10:54.890 cpu : usr=1.40%, sys=4.60%, ctx=3960, majf=0, minf=7 00:10:54.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.890 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.890 00:10:54.890 Run status group 0 (all jobs): 00:10:54.890 READ: bw=30.9MiB/s (32.5MB/s), 7640KiB/s-8228KiB/s (7824kB/s-8425kB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:10:54.890 WRITE: bw=35.5MiB/s (37.2MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=35.6MiB (37.3MB), run=1001-1001msec 00:10:54.890 00:10:54.890 Disk stats (read/write): 00:10:54.890 nvme0n1: ios=1945/2048, merge=0/0, ticks=443/368, in_queue=811, util=87.86% 00:10:54.890 nvme0n2: ios=1806/2048, merge=0/0, ticks=432/360, in_queue=792, util=86.67% 00:10:54.890 nvme0n3: ios=1536/1859, merge=0/0, ticks=399/398, in_queue=797, util=89.20% 00:10:54.890 nvme0n4: ios=1536/1859, merge=0/0, ticks=374/384, in_queue=758, util=89.76% 00:10:54.890 06:35:50 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:54.890 [global] 00:10:54.890 thread=1 00:10:54.890 invalidate=1 00:10:54.890 rw=randwrite 00:10:54.890 time_based=1 00:10:54.890 runtime=1 00:10:54.890 ioengine=libaio 00:10:54.890 direct=1 00:10:54.890 bs=4096 00:10:54.890 iodepth=1 00:10:54.890 norandommap=0 00:10:54.890 numjobs=1 00:10:54.890 00:10:54.890 verify_dump=1 00:10:54.890 verify_backlog=512 00:10:54.890 verify_state_save=0 00:10:54.890 do_verify=1 00:10:54.890 verify=crc32c-intel 00:10:54.890 [job0] 00:10:54.890 filename=/dev/nvme0n1 00:10:54.890 [job1] 00:10:54.890 filename=/dev/nvme0n2 00:10:54.890 [job2] 00:10:54.890 filename=/dev/nvme0n3 00:10:54.890 [job3] 00:10:54.890 filename=/dev/nvme0n4 00:10:54.890 Could not set queue depth (nvme0n1) 00:10:54.890 Could not set queue depth (nvme0n2) 00:10:54.890 Could not set queue depth (nvme0n3) 00:10:54.890 Could not set queue depth (nvme0n4) 00:10:55.150 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.150 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.150 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.150 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
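The job file echoed above is what scripts/fio-wrapper generates from its -p/-i/-d/-t/-r flags. To reproduce such a run outside the harness, a minimal sketch follows; it assumes the target's four namespaces enumerate as /dev/nvme0n1 through /dev/nvme0n4 (as they do in this log), uses an arbitrary job-file path, and takes every parameter from the config echoed above:

    cat > /tmp/nvmf-randwrite.fio <<'EOF'
    [global]
    ioengine=libaio        # async I/O engine used throughout these runs
    direct=1               # bypass the page cache
    rw=randwrite
    bs=4096
    iodepth=1
    numjobs=1
    time_based=1
    runtime=1
    invalidate=1
    do_verify=1            # read back and check what was written
    verify=crc32c-intel
    verify_dump=1
    verify_backlog=512
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf-randwrite.fio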
00:10:55.150 fio-3.35 00:10:55.150 Starting 4 threads 00:10:56.527 00:10:56.527 job0: (groupid=0, jobs=1): err= 0: pid=75334: Thu Dec 5 06:35:51 2024 00:10:56.527 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:56.527 slat (nsec): min=10013, max=41518, avg=12663.98, stdev=2966.48 00:10:56.527 clat (usec): min=125, max=595, avg=162.40, stdev=17.73 00:10:56.527 lat (usec): min=135, max=606, avg=175.07, stdev=18.06 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:56.527 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:56.527 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:10:56.527 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 233], 99.95th=[ 277], 00:10:56.527 | 99.99th=[ 594] 00:10:56.527 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:56.527 slat (usec): min=12, max=162, avg=21.01, stdev= 9.57 00:10:56.527 clat (usec): min=3, max=2017, avg=126.36, stdev=38.88 00:10:56.527 lat (usec): min=105, max=2035, avg=147.37, stdev=39.36 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 95], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:10:56.527 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 128], 00:10:56.527 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 153], 00:10:56.527 | 99.00th=[ 172], 99.50th=[ 186], 99.90th=[ 347], 99.95th=[ 506], 00:10:56.527 | 99.99th=[ 2024] 00:10:56.527 bw ( KiB/s): min=12288, max=12288, per=30.38%, avg=12288.00, stdev= 0.00, samples=1 00:10:56.527 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:56.527 lat (usec) : 4=0.03%, 50=0.08%, 100=0.86%, 250=98.89%, 500=0.08% 00:10:56.527 lat (usec) : 750=0.03% 00:10:56.527 lat (msec) : 4=0.02% 00:10:56.527 cpu : usr=2.00%, sys=8.50%, ctx=6165, majf=0, minf=5 00:10:56.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 issued rwts: total=3069,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.527 job1: (groupid=0, jobs=1): err= 0: pid=75335: Thu Dec 5 06:35:51 2024 00:10:56.527 read: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec) 00:10:56.527 slat (nsec): min=10695, max=77049, avg=15713.13, stdev=6315.43 00:10:56.527 clat (usec): min=206, max=684, avg=262.27, stdev=39.76 00:10:56.527 lat (usec): min=225, max=703, avg=277.99, stdev=39.94 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:10:56.527 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:10:56.527 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:10:56.527 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 685], 00:10:56.527 | 99.99th=[ 685] 00:10:56.527 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.527 slat (nsec): min=16274, max=70736, avg=22477.11, stdev=6745.81 00:10:56.527 clat (usec): min=90, max=3203, avg=202.91, stdev=84.52 00:10:56.527 lat (usec): min=109, max=3227, avg=225.39, stdev=84.36 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 112], 5.00th=[ 149], 10.00th=[ 167], 20.00th=[ 180], 00:10:56.527 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:10:56.527 | 70.00th=[ 215], 
80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:10:56.527 | 99.00th=[ 302], 99.50th=[ 429], 99.90th=[ 1020], 99.95th=[ 1614], 00:10:56.527 | 99.99th=[ 3195] 00:10:56.527 bw ( KiB/s): min= 8192, max= 8192, per=20.25%, avg=8192.00, stdev= 0.00, samples=1 00:10:56.527 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:56.527 lat (usec) : 100=0.18%, 250=68.14%, 500=31.03%, 750=0.56%, 1000=0.03% 00:10:56.527 lat (msec) : 2=0.05%, 4=0.03% 00:10:56.527 cpu : usr=2.20%, sys=5.50%, ctx=3958, majf=0, minf=15 00:10:56.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.527 job2: (groupid=0, jobs=1): err= 0: pid=75336: Thu Dec 5 06:35:51 2024 00:10:56.527 read: IOPS=1891, BW=7564KiB/s (7746kB/s)(7572KiB/1001msec) 00:10:56.527 slat (nsec): min=11446, max=39702, avg=13875.06, stdev=3222.90 00:10:56.527 clat (usec): min=173, max=528, avg=260.53, stdev=23.40 00:10:56.527 lat (usec): min=186, max=543, avg=274.41, stdev=23.56 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:56.527 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:10:56.527 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 302], 00:10:56.527 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 469], 99.95th=[ 529], 00:10:56.527 | 99.99th=[ 529] 00:10:56.527 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.527 slat (nsec): min=16287, max=76212, avg=22065.18, stdev=6382.15 00:10:56.527 clat (usec): min=112, max=1920, avg=209.10, stdev=69.04 00:10:56.527 lat (usec): min=133, max=1940, avg=231.16, stdev=70.23 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 128], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 184], 00:10:56.527 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:56.527 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 265], 00:10:56.527 | 99.00th=[ 396], 99.50th=[ 424], 99.90th=[ 955], 99.95th=[ 1729], 00:10:56.527 | 99.99th=[ 1926] 00:10:56.527 bw ( KiB/s): min= 8192, max= 8192, per=20.25%, avg=8192.00, stdev= 0.00, samples=1 00:10:56.527 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:56.527 lat (usec) : 250=64.78%, 500=35.04%, 750=0.08%, 1000=0.05% 00:10:56.527 lat (msec) : 2=0.05% 00:10:56.527 cpu : usr=1.70%, sys=5.40%, ctx=3941, majf=0, minf=11 00:10:56.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 issued rwts: total=1893,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.527 job3: (groupid=0, jobs=1): err= 0: pid=75337: Thu Dec 5 06:35:51 2024 00:10:56.527 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:56.527 slat (nsec): min=10556, max=50526, avg=14184.16, stdev=3828.03 00:10:56.527 clat (usec): min=139, max=8101, avg=186.72, stdev=173.51 00:10:56.527 lat (usec): min=152, max=8114, avg=200.90, stdev=173.65 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 
149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:10:56.527 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:56.527 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:10:56.527 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 2180], 99.95th=[ 3097], 00:10:56.527 | 99.99th=[ 8094] 00:10:56.527 write: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:10:56.527 slat (nsec): min=13712, max=88580, avg=21278.53, stdev=5482.25 00:10:56.527 clat (usec): min=101, max=1023, avg=140.11, stdev=26.78 00:10:56.527 lat (usec): min=119, max=1041, avg=161.39, stdev=27.34 00:10:56.527 clat percentiles (usec): 00:10:56.527 | 1.00th=[ 113], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 128], 00:10:56.527 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:56.527 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:10:56.527 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 206], 99.95th=[ 996], 00:10:56.527 | 99.99th=[ 1020] 00:10:56.527 bw ( KiB/s): min=12288, max=12288, per=30.38%, avg=12288.00, stdev= 0.00, samples=1 00:10:56.527 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:56.527 lat (usec) : 250=99.80%, 500=0.09%, 1000=0.02% 00:10:56.527 lat (msec) : 2=0.04%, 4=0.04%, 10=0.02% 00:10:56.527 cpu : usr=1.60%, sys=8.40%, ctx=5514, majf=0, minf=19 00:10:56.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.527 issued rwts: total=2560,2954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.527 00:10:56.527 Run status group 0 (all jobs): 00:10:56.527 READ: bw=36.8MiB/s (38.6MB/s), 7564KiB/s-12.0MiB/s (7746kB/s-12.6MB/s), io=36.8MiB (38.6MB), run=1001-1001msec 00:10:56.528 WRITE: bw=39.5MiB/s (41.4MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.5MiB (41.5MB), run=1001-1001msec 00:10:56.528 00:10:56.528 Disk stats (read/write): 00:10:56.528 nvme0n1: ios=2610/2735, merge=0/0, ticks=478/379, in_queue=857, util=88.48% 00:10:56.528 nvme0n2: ios=1575/1922, merge=0/0, ticks=428/405, in_queue=833, util=88.35% 00:10:56.528 nvme0n3: ios=1536/1868, merge=0/0, ticks=406/399, in_queue=805, util=89.26% 00:10:56.528 nvme0n4: ios=2227/2560, merge=0/0, ticks=419/388, in_queue=807, util=89.50% 00:10:56.528 06:35:51 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:56.528 [global] 00:10:56.528 thread=1 00:10:56.528 invalidate=1 00:10:56.528 rw=write 00:10:56.528 time_based=1 00:10:56.528 runtime=1 00:10:56.528 ioengine=libaio 00:10:56.528 direct=1 00:10:56.528 bs=4096 00:10:56.528 iodepth=128 00:10:56.528 norandommap=0 00:10:56.528 numjobs=1 00:10:56.528 00:10:56.528 verify_dump=1 00:10:56.528 verify_backlog=512 00:10:56.528 verify_state_save=0 00:10:56.528 do_verify=1 00:10:56.528 verify=crc32c-intel 00:10:56.528 [job0] 00:10:56.528 filename=/dev/nvme0n1 00:10:56.528 [job1] 00:10:56.528 filename=/dev/nvme0n2 00:10:56.528 [job2] 00:10:56.528 filename=/dev/nvme0n3 00:10:56.528 [job3] 00:10:56.528 filename=/dev/nvme0n4 00:10:56.528 Could not set queue depth (nvme0n1) 00:10:56.528 Could not set queue depth (nvme0n2) 00:10:56.528 Could not set queue depth (nvme0n3) 00:10:56.528 Could not set queue depth (nvme0n4) 00:10:56.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.528 fio-3.35 00:10:56.528 Starting 4 threads 00:10:57.907 00:10:57.907 job0: (groupid=0, jobs=1): err= 0: pid=75398: Thu Dec 5 06:35:52 2024 00:10:57.907 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:57.907 slat (usec): min=5, max=4703, avg=89.44, stdev=373.01 00:10:57.907 clat (usec): min=8234, max=17235, avg=11976.04, stdev=1262.19 00:10:57.907 lat (usec): min=8261, max=17249, avg=12065.48, stdev=1292.13 00:10:57.907 clat percentiles (usec): 00:10:57.907 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:10:57.907 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:10:57.907 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13435], 95.00th=[14353], 00:10:57.907 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:10:57.907 | 99.99th=[17171] 00:10:57.907 write: IOPS=5438, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1003msec); 0 zone resets 00:10:57.907 slat (usec): min=9, max=6621, avg=92.21, stdev=444.36 00:10:57.907 clat (usec): min=198, max=18044, avg=11984.68, stdev=1493.67 00:10:57.907 lat (usec): min=2818, max=18088, avg=12076.90, stdev=1551.90 00:10:57.907 clat percentiles (usec): 00:10:57.907 | 1.00th=[ 6783], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:10:57.907 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:10:57.907 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13435], 95.00th=[14222], 00:10:57.907 | 99.00th=[15664], 99.50th=[15926], 99.90th=[17695], 99.95th=[17695], 00:10:57.907 | 99.99th=[17957] 00:10:57.907 bw ( KiB/s): min=20480, max=22136, per=29.26%, avg=21308.00, stdev=1170.97, samples=2 00:10:57.907 iops : min= 5120, max= 5534, avg=5327.00, stdev=292.74, samples=2 00:10:57.907 lat (usec) : 250=0.01% 00:10:57.907 lat (msec) : 4=0.40%, 10=3.34%, 20=96.26% 00:10:57.907 cpu : usr=5.39%, sys=13.47%, ctx=382, majf=0, minf=17 00:10:57.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:57.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.907 issued rwts: total=5120,5455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.907 job1: (groupid=0, jobs=1): err= 0: pid=75399: Thu Dec 5 06:35:52 2024 00:10:57.907 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:57.907 slat (usec): min=4, max=6670, avg=112.61, stdev=570.87 00:10:57.907 clat (usec): min=8811, max=28956, avg=14846.00, stdev=4368.01 00:10:57.907 lat (usec): min=10910, max=29415, avg=14958.61, stdev=4371.87 00:10:57.907 clat percentiles (usec): 00:10:57.907 | 1.00th=[ 9896], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:10:57.907 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[13042], 00:10:57.907 | 70.00th=[13304], 80.00th=[21627], 90.00th=[22676], 95.00th=[22938], 00:10:57.907 | 99.00th=[24511], 99.50th=[24773], 99.90th=[28967], 99.95th=[28967], 00:10:57.907 | 99.99th=[28967] 00:10:57.907 write: IOPS=4374, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1005msec); 0 zone 
resets 00:10:57.907 slat (usec): min=11, max=5430, avg=114.93, stdev=468.58 00:10:57.907 clat (usec): min=4178, max=29817, avg=15036.64, stdev=4874.98 00:10:57.907 lat (usec): min=4741, max=30063, avg=15151.57, stdev=4886.65 00:10:57.907 clat percentiles (usec): 00:10:57.907 | 1.00th=[10028], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:10:57.907 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:57.907 | 70.00th=[13173], 80.00th=[18220], 90.00th=[25035], 95.00th=[25560], 00:10:57.907 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29754], 99.95th=[29754], 00:10:57.907 | 99.99th=[29754] 00:10:57.907 bw ( KiB/s): min=13664, max=20521, per=23.47%, avg=17092.50, stdev=4848.63, samples=2 00:10:57.907 iops : min= 3416, max= 5130, avg=4273.00, stdev=1211.98, samples=2 00:10:57.907 lat (msec) : 10=1.05%, 20=78.80%, 50=20.15% 00:10:57.907 cpu : usr=4.38%, sys=11.25%, ctx=496, majf=0, minf=10 00:10:57.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:57.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.907 issued rwts: total=4096,4396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.908 job2: (groupid=0, jobs=1): err= 0: pid=75400: Thu Dec 5 06:35:52 2024 00:10:57.908 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:10:57.908 slat (usec): min=5, max=3989, avg=103.35, stdev=491.50 00:10:57.908 clat (usec): min=268, max=15802, avg=13654.85, stdev=1592.79 00:10:57.908 lat (usec): min=3602, max=15822, avg=13758.21, stdev=1519.11 00:10:57.908 clat percentiles (usec): 00:10:57.908 | 1.00th=[ 6849], 5.00th=[11863], 10.00th=[12256], 20.00th=[12518], 00:10:57.908 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13829], 60.00th=[14484], 00:10:57.908 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15270], 95.00th=[15401], 00:10:57.908 | 99.00th=[15664], 99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:10:57.908 | 99.99th=[15795] 00:10:57.908 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:57.908 slat (usec): min=12, max=3180, avg=107.08, stdev=465.21 00:10:57.908 clat (usec): min=9402, max=15869, avg=13993.35, stdev=1212.59 00:10:57.908 lat (usec): min=11786, max=15899, avg=14100.43, stdev=1129.60 00:10:57.908 clat percentiles (usec): 00:10:57.908 | 1.00th=[10814], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:10:57.908 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14353], 60.00th=[14615], 00:10:57.908 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15401], 95.00th=[15533], 00:10:57.908 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:10:57.908 | 99.99th=[15926] 00:10:57.908 bw ( KiB/s): min=17194, max=19704, per=25.34%, avg=18449.00, stdev=1774.84, samples=2 00:10:57.908 iops : min= 4298, max= 4926, avg=4612.00, stdev=444.06, samples=2 00:10:57.908 lat (usec) : 500=0.01% 00:10:57.908 lat (msec) : 4=0.26%, 10=0.92%, 20=98.81% 00:10:57.908 cpu : usr=4.39%, sys=13.27%, ctx=288, majf=0, minf=13 00:10:57.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:57.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.908 issued rwts: total=4545,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.908 latency : target=0, window=0, percentile=100.00%, depth=128 
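As a sanity check on the figures in these reports, fio's bandwidth is simply IOPS times block size. For job0 of this run (write: IOPS=5438 at bs=4096), the arithmetic reproduces the reported 21.2MiB/s:

    awk 'BEGIN { printf "%.1f MiB/s\n", 5438 * 4096 / (1024 * 1024) }'
    # prints: 21.2 MiB/s  (5438 IOPS * 4096 B = 22274048 B/s)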
00:10:57.908 job3: (groupid=0, jobs=1): err= 0: pid=75401: Thu Dec 5 06:35:52 2024 00:10:57.908 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:57.908 slat (usec): min=5, max=9496, avg=132.31, stdev=650.80 00:10:57.908 clat (usec): min=10678, max=28518, avg=17136.81, stdev=3679.26 00:10:57.908 lat (usec): min=13640, max=28584, avg=17269.11, stdev=3660.68 00:10:57.908 clat percentiles (usec): 00:10:57.908 | 1.00th=[11731], 5.00th=[13960], 10.00th=[14091], 20.00th=[14484], 00:10:57.908 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:10:57.908 | 70.00th=[19792], 80.00th=[22152], 90.00th=[22676], 95.00th=[23462], 00:10:57.908 | 99.00th=[24773], 99.50th=[25297], 99.90th=[28443], 99.95th=[28443], 00:10:57.908 | 99.99th=[28443] 00:10:57.908 write: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec); 0 zone resets 00:10:57.908 slat (usec): min=10, max=6206, avg=129.89, stdev=523.23 00:10:57.908 clat (usec): min=2960, max=28276, avg=17056.49, stdev=4252.63 00:10:57.908 lat (usec): min=5661, max=28293, avg=17186.37, stdev=4248.94 00:10:57.908 clat percentiles (usec): 00:10:57.908 | 1.00th=[11338], 5.00th=[13829], 10.00th=[14091], 20.00th=[14484], 00:10:57.908 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:10:57.908 | 70.00th=[15533], 80.00th=[22152], 90.00th=[25035], 95.00th=[25560], 00:10:57.908 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:10:57.908 | 99.99th=[28181] 00:10:57.908 bw ( KiB/s): min=12512, max=17194, per=20.40%, avg=14853.00, stdev=3310.67, samples=2 00:10:57.908 iops : min= 3128, max= 4298, avg=3713.00, stdev=827.31, samples=2 00:10:57.908 lat (msec) : 4=0.01%, 10=0.43%, 20=73.12%, 50=26.44% 00:10:57.908 cpu : usr=3.39%, sys=10.26%, ctx=521, majf=0, minf=13 00:10:57.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:57.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.908 issued rwts: total=3584,3837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.908 00:10:57.908 Run status group 0 (all jobs): 00:10:57.908 READ: bw=67.4MiB/s (70.7MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=67.8MiB (71.0MB), run=1003-1005msec 00:10:57.908 WRITE: bw=71.1MiB/s (74.6MB/s), 14.9MiB/s-21.2MiB/s (15.6MB/s-22.3MB/s), io=71.5MiB (74.9MB), run=1003-1005msec 00:10:57.908 00:10:57.908 Disk stats (read/write): 00:10:57.908 nvme0n1: ios=4404/4608, merge=0/0, ticks=16691/15705, in_queue=32396, util=88.58% 00:10:57.908 nvme0n2: ios=3657/4096, merge=0/0, ticks=11090/13156, in_queue=24246, util=88.99% 00:10:57.908 nvme0n3: ios=3648/4096, merge=0/0, ticks=11505/12843, in_queue=24348, util=89.29% 00:10:57.908 nvme0n4: ios=3072/3584, merge=0/0, ticks=11286/13133, in_queue=24419, util=89.13% 00:10:57.908 06:35:52 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:57.908 [global] 00:10:57.908 thread=1 00:10:57.908 invalidate=1 00:10:57.908 rw=randwrite 00:10:57.908 time_based=1 00:10:57.908 runtime=1 00:10:57.908 ioengine=libaio 00:10:57.908 direct=1 00:10:57.908 bs=4096 00:10:57.908 iodepth=128 00:10:57.908 norandommap=0 00:10:57.908 numjobs=1 00:10:57.908 00:10:57.908 verify_dump=1 00:10:57.908 verify_backlog=512 00:10:57.908 verify_state_save=0 00:10:57.908 do_verify=1 00:10:57.908 verify=crc32c-intel 00:10:57.908 [job0] 
00:10:57.908 filename=/dev/nvme0n1 00:10:57.908 [job1] 00:10:57.908 filename=/dev/nvme0n2 00:10:57.908 [job2] 00:10:57.908 filename=/dev/nvme0n3 00:10:57.908 [job3] 00:10:57.908 filename=/dev/nvme0n4 00:10:57.908 Could not set queue depth (nvme0n1) 00:10:57.908 Could not set queue depth (nvme0n2) 00:10:57.908 Could not set queue depth (nvme0n3) 00:10:57.908 Could not set queue depth (nvme0n4) 00:10:57.908 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.908 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.908 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.908 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.908 fio-3.35 00:10:57.908 Starting 4 threads 00:10:58.846 00:10:58.846 job0: (groupid=0, jobs=1): err= 0: pid=75454: Thu Dec 5 06:35:54 2024 00:10:58.846 read: IOPS=4850, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:10:58.846 slat (usec): min=5, max=9289, avg=94.88, stdev=462.23 00:10:58.846 clat (usec): min=164, max=19147, avg=12506.62, stdev=1560.26 00:10:58.846 lat (usec): min=1997, max=19161, avg=12601.50, stdev=1496.53 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[ 5276], 5.00th=[11338], 10.00th=[11863], 20.00th=[12125], 00:10:58.846 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:58.846 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13698], 00:10:58.846 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:10:58.846 | 99.99th=[19268] 00:10:58.846 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:58.846 slat (usec): min=9, max=7129, avg=97.87, stdev=428.90 00:10:58.846 clat (usec): min=9239, max=17349, avg=12845.66, stdev=994.92 00:10:58.846 lat (usec): min=10255, max=17373, avg=12943.53, stdev=904.78 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[10290], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:10:58.846 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:58.846 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[14222], 00:10:58.846 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:10:58.846 | 99.99th=[17433] 00:10:58.846 bw ( KiB/s): min=20480, max=20480, per=26.39%, avg=20480.00, stdev= 0.00, samples=2 00:10:58.846 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:58.846 lat (usec) : 250=0.01% 00:10:58.846 lat (msec) : 4=0.32%, 10=1.58%, 20=98.09% 00:10:58.846 cpu : usr=4.79%, sys=13.07%, ctx=314, majf=0, minf=1 00:10:58.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.846 issued rwts: total=4865,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.846 job1: (groupid=0, jobs=1): err= 0: pid=75455: Thu Dec 5 06:35:54 2024 00:10:58.846 read: IOPS=4989, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1001msec) 00:10:58.846 slat (usec): min=8, max=5236, avg=93.31, stdev=441.88 00:10:58.846 clat (usec): min=219, max=14758, avg=12319.72, stdev=1165.10 00:10:58.846 lat (usec): min=2718, max=15456, avg=12413.02, stdev=1080.73 00:10:58.846 clat 
percentiles (usec): 00:10:58.846 | 1.00th=[ 6259], 5.00th=[11338], 10.00th=[11731], 20.00th=[11994], 00:10:58.846 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:58.846 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[13304], 00:10:58.846 | 99.00th=[14484], 99.50th=[14746], 99.90th=[14746], 99.95th=[14746], 00:10:58.846 | 99.99th=[14746] 00:10:58.846 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:10:58.846 slat (usec): min=11, max=2852, avg=96.61, stdev=416.35 00:10:58.846 clat (usec): min=9411, max=14208, avg=12682.94, stdev=631.77 00:10:58.846 lat (usec): min=10044, max=15773, avg=12779.55, stdev=491.51 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[10159], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:10:58.846 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:10:58.846 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:10:58.846 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:10:58.846 | 99.99th=[14222] 00:10:58.846 bw ( KiB/s): min=20480, max=20480, per=26.39%, avg=20480.00, stdev= 0.00, samples=1 00:10:58.846 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:58.846 lat (usec) : 250=0.01% 00:10:58.846 lat (msec) : 4=0.32%, 10=1.69%, 20=97.98% 00:10:58.846 cpu : usr=4.10%, sys=13.90%, ctx=326, majf=0, minf=4 00:10:58.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.846 issued rwts: total=4994,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.846 job2: (groupid=0, jobs=1): err= 0: pid=75456: Thu Dec 5 06:35:54 2024 00:10:58.846 read: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec) 00:10:58.846 slat (usec): min=5, max=3414, avg=106.10, stdev=507.09 00:10:58.846 clat (usec): min=257, max=15839, avg=13992.73, stdev=1423.11 00:10:58.846 lat (usec): min=3477, max=17868, avg=14098.83, stdev=1333.28 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[ 7308], 5.00th=[12125], 10.00th=[13042], 20.00th=[13435], 00:10:58.846 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:10:58.846 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15533], 00:10:58.846 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:10:58.846 | 99.99th=[15795] 00:10:58.846 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:58.846 slat (usec): min=10, max=4908, avg=111.77, stdev=488.71 00:10:58.846 clat (usec): min=9490, max=17841, avg=14585.11, stdev=976.57 00:10:58.846 lat (usec): min=11520, max=18110, avg=14696.88, stdev=857.16 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[11600], 5.00th=[12780], 10.00th=[13566], 20.00th=[13960], 00:10:58.846 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:10:58.846 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:10:58.846 | 99.00th=[17171], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:10:58.846 | 99.99th=[17957] 00:10:58.846 bw ( KiB/s): min=17603, max=19296, per=23.78%, avg=18449.50, stdev=1197.13, samples=2 00:10:58.846 iops : min= 4400, max= 4824, avg=4612.00, stdev=299.81, samples=2 00:10:58.846 lat (usec) : 500=0.01% 00:10:58.846 lat (msec) : 4=0.36%, 
10=0.38%, 20=99.24% 00:10:58.846 cpu : usr=3.90%, sys=13.19%, ctx=287, majf=0, minf=5 00:10:58.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:58.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.846 issued rwts: total=4226,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.846 job3: (groupid=0, jobs=1): err= 0: pid=75457: Thu Dec 5 06:35:54 2024 00:10:58.846 read: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1002msec) 00:10:58.846 slat (usec): min=7, max=4044, avg=106.24, stdev=510.28 00:10:58.846 clat (usec): min=242, max=15813, avg=13903.83, stdev=1419.74 00:10:58.846 lat (usec): min=3312, max=16338, avg=14010.07, stdev=1328.53 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[ 7046], 5.00th=[12125], 10.00th=[13042], 20.00th=[13304], 00:10:58.846 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:10:58.846 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15008], 95.00th=[15401], 00:10:58.846 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:10:58.846 | 99.99th=[15795] 00:10:58.846 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:58.846 slat (usec): min=9, max=4325, avg=109.75, stdev=485.92 00:10:58.846 clat (usec): min=10403, max=16415, avg=14383.71, stdev=809.70 00:10:58.846 lat (usec): min=12178, max=17116, avg=14493.46, stdev=659.99 00:10:58.846 clat percentiles (usec): 00:10:58.846 | 1.00th=[11469], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:10:58.846 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:10:58.846 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15664], 00:10:58.846 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16319], 99.95th=[16319], 00:10:58.846 | 99.99th=[16450] 00:10:58.846 bw ( KiB/s): min=17963, max=18936, per=23.78%, avg=18449.50, stdev=688.01, samples=2 00:10:58.846 iops : min= 4490, max= 4734, avg=4612.00, stdev=172.53, samples=2 00:10:58.846 lat (usec) : 250=0.01% 00:10:58.846 lat (msec) : 4=0.36%, 10=0.39%, 20=99.24% 00:10:58.846 cpu : usr=5.00%, sys=11.49%, ctx=283, majf=0, minf=7 00:10:58.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:58.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.846 issued rwts: total=4321,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.846 00:10:58.846 Run status group 0 (all jobs): 00:10:58.846 READ: bw=71.7MiB/s (75.2MB/s), 16.5MiB/s-19.5MiB/s (17.3MB/s-20.4MB/s), io=71.9MiB (75.4MB), run=1001-1003msec 00:10:58.846 WRITE: bw=75.8MiB/s (79.5MB/s), 18.0MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1001-1003msec 00:10:58.846 00:10:58.846 Disk stats (read/write): 00:10:58.846 nvme0n1: ios=4146/4480, merge=0/0, ticks=11396/12324, in_queue=23720, util=87.47% 00:10:58.846 nvme0n2: ios=4172/4608, merge=0/0, ticks=11316/12827, in_queue=24143, util=88.78% 00:10:58.846 nvme0n3: ios=3584/4000, merge=0/0, ticks=11505/12871, in_queue=24376, util=88.98% 00:10:58.846 nvme0n4: ios=3584/4064, merge=0/0, ticks=11507/12908, in_queue=24415, util=89.63% 00:10:59.106 06:35:54 -- target/fio.sh@55 -- # sync 00:10:59.106 06:35:54 -- target/fio.sh@59 -- # 
fio_pid=75470 00:10:59.106 06:35:54 -- target/fio.sh@61 -- # sleep 3 00:10:59.106 06:35:54 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:59.106 [global] 00:10:59.106 thread=1 00:10:59.106 invalidate=1 00:10:59.106 rw=read 00:10:59.106 time_based=1 00:10:59.106 runtime=10 00:10:59.106 ioengine=libaio 00:10:59.106 direct=1 00:10:59.106 bs=4096 00:10:59.106 iodepth=1 00:10:59.106 norandommap=1 00:10:59.106 numjobs=1 00:10:59.106 00:10:59.106 [job0] 00:10:59.106 filename=/dev/nvme0n1 00:10:59.106 [job1] 00:10:59.106 filename=/dev/nvme0n2 00:10:59.106 [job2] 00:10:59.106 filename=/dev/nvme0n3 00:10:59.106 [job3] 00:10:59.106 filename=/dev/nvme0n4 00:10:59.106 Could not set queue depth (nvme0n1) 00:10:59.106 Could not set queue depth (nvme0n2) 00:10:59.106 Could not set queue depth (nvme0n3) 00:10:59.106 Could not set queue depth (nvme0n4) 00:10:59.106 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.106 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.106 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.106 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.106 fio-3.35 00:10:59.106 Starting 4 threads 00:11:02.392 06:35:57 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:02.392 fio: pid=75513, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.392 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63676416, buflen=4096 00:11:02.392 06:35:57 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:02.651 fio: pid=75512, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.651 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47300608, buflen=4096 00:11:02.651 06:35:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.651 06:35:57 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:02.910 fio: pid=75510, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.910 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8437760, buflen=4096 00:11:02.910 06:35:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.910 06:35:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:03.169 fio: pid=75511, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.169 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57376768, buflen=4096 00:11:03.169 06:35:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.169 06:35:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:03.169 00:11:03.169 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75510: Thu Dec 5 06:35:58 2024 00:11:03.169 read: IOPS=5355, BW=20.9MiB/s (21.9MB/s)(72.0MiB/3444msec) 00:11:03.169 slat (usec): min=7, max=14321, avg=15.84, stdev=183.65 00:11:03.169 clat (usec): 
min=114, max=7780, avg=169.72, stdev=104.90 00:11:03.169 lat (usec): min=124, max=14561, avg=185.56, stdev=213.71 00:11:03.169 clat percentiles (usec): 00:11:03.169 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:11:03.169 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:11:03.169 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 210], 00:11:03.169 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 971], 99.95th=[ 1893], 00:11:03.169 | 99.99th=[ 7701] 00:11:03.169 bw ( KiB/s): min=21434, max=22936, per=34.54%, avg=22149.67, stdev=648.39, samples=6 00:11:03.169 iops : min= 5358, max= 5734, avg=5537.33, stdev=162.21, samples=6 00:11:03.169 lat (usec) : 250=96.63%, 500=3.21%, 750=0.03%, 1000=0.03% 00:11:03.169 lat (msec) : 2=0.05%, 4=0.04%, 10=0.01% 00:11:03.169 cpu : usr=1.31%, sys=6.27%, ctx=18450, majf=0, minf=1 00:11:03.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 issued rwts: total=18445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.169 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75511: Thu Dec 5 06:35:58 2024 00:11:03.169 read: IOPS=3771, BW=14.7MiB/s (15.4MB/s)(54.7MiB/3714msec) 00:11:03.169 slat (usec): min=9, max=16535, avg=18.27, stdev=217.60 00:11:03.169 clat (usec): min=4, max=3975, avg=245.42, stdev=76.80 00:11:03.169 lat (usec): min=126, max=16799, avg=263.69, stdev=245.75 00:11:03.169 clat percentiles (usec): 00:11:03.169 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 151], 20.00th=[ 215], 00:11:03.169 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:11:03.169 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:11:03.169 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 889], 99.95th=[ 1369], 00:11:03.169 | 99.99th=[ 2606] 00:11:03.169 bw ( KiB/s): min=13712, max=17882, per=23.03%, avg=14767.14, stdev=1430.18, samples=7 00:11:03.169 iops : min= 3428, max= 4470, avg=3691.71, stdev=357.36, samples=7 00:11:03.169 lat (usec) : 10=0.01%, 250=47.07%, 500=52.62%, 750=0.16%, 1000=0.04% 00:11:03.169 lat (msec) : 2=0.07%, 4=0.01% 00:11:03.169 cpu : usr=1.24%, sys=4.63%, ctx=14018, majf=0, minf=2 00:11:03.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 issued rwts: total=14009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.169 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75512: Thu Dec 5 06:35:58 2024 00:11:03.169 read: IOPS=3622, BW=14.1MiB/s (14.8MB/s)(45.1MiB/3188msec) 00:11:03.169 slat (usec): min=7, max=11641, avg=15.92, stdev=150.62 00:11:03.169 clat (usec): min=142, max=2619, avg=258.92, stdev=55.93 00:11:03.169 lat (usec): min=154, max=11949, avg=274.84, stdev=161.57 00:11:03.169 clat percentiles (usec): 00:11:03.169 | 1.00th=[ 180], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 239], 00:11:03.169 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:11:03.169 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 
306], 00:11:03.169 | 99.00th=[ 334], 99.50th=[ 392], 99.90th=[ 881], 99.95th=[ 979], 00:11:03.169 | 99.99th=[ 2540] 00:11:03.169 bw ( KiB/s): min=14552, max=14832, per=22.90%, avg=14688.00, stdev=105.28, samples=6 00:11:03.169 iops : min= 3638, max= 3708, avg=3672.33, stdev=26.03, samples=6 00:11:03.169 lat (usec) : 250=40.77%, 500=58.96%, 750=0.15%, 1000=0.08% 00:11:03.169 lat (msec) : 2=0.01%, 4=0.03% 00:11:03.169 cpu : usr=0.85%, sys=4.52%, ctx=11553, majf=0, minf=2 00:11:03.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.169 issued rwts: total=11549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.169 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75513: Thu Dec 5 06:35:58 2024 00:11:03.169 read: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(60.7MiB/2939msec) 00:11:03.169 slat (nsec): min=10040, max=72357, avg=12700.46, stdev=3327.27 00:11:03.169 clat (usec): min=132, max=1739, avg=175.34, stdev=22.09 00:11:03.169 lat (usec): min=143, max=1751, avg=188.04, stdev=22.30 00:11:03.169 clat percentiles (usec): 00:11:03.169 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:11:03.169 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:11:03.169 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:11:03.170 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 245], 99.95th=[ 262], 00:11:03.170 | 99.99th=[ 783] 00:11:03.170 bw ( KiB/s): min=20718, max=21344, per=32.96%, avg=21140.40, stdev=253.79, samples=5 00:11:03.170 iops : min= 5179, max= 5336, avg=5285.00, stdev=63.66, samples=5 00:11:03.170 lat (usec) : 250=99.92%, 500=0.04%, 750=0.02%, 1000=0.01% 00:11:03.170 lat (msec) : 2=0.01% 00:11:03.170 cpu : usr=1.40%, sys=5.85%, ctx=15547, majf=0, minf=1 00:11:03.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.170 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.170 issued rwts: total=15547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.170 00:11:03.170 Run status group 0 (all jobs): 00:11:03.170 READ: bw=62.6MiB/s (65.7MB/s), 14.1MiB/s-20.9MiB/s (14.8MB/s-21.9MB/s), io=233MiB (244MB), run=2939-3714msec 00:11:03.170 00:11:03.170 Disk stats (read/write): 00:11:03.170 nvme0n1: ios=17984/0, merge=0/0, ticks=3134/0, in_queue=3134, util=94.53% 00:11:03.170 nvme0n2: ios=13422/0, merge=0/0, ticks=3416/0, in_queue=3416, util=95.40% 00:11:03.170 nvme0n3: ios=11327/0, merge=0/0, ticks=2988/0, in_queue=2988, util=96.21% 00:11:03.170 nvme0n4: ios=15166/0, merge=0/0, ticks=2765/0, in_queue=2765, util=96.76% 00:11:03.428 06:35:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.428 06:35:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:03.686 06:35:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.686 06:35:58 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:03.945 06:35:59 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.945 06:35:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:04.204 06:35:59 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.204 06:35:59 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:04.462 06:35:59 -- target/fio.sh@69 -- # fio_status=0 00:11:04.462 06:35:59 -- target/fio.sh@70 -- # wait 75470 00:11:04.462 06:35:59 -- target/fio.sh@70 -- # fio_status=4 00:11:04.462 06:35:59 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.462 06:35:59 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.462 06:35:59 -- common/autotest_common.sh@1208 -- # local i=0 00:11:04.462 06:35:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:04.462 06:35:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.462 06:35:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:04.462 06:35:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.462 nvmf hotplug test: fio failed as expected 00:11:04.462 06:35:59 -- common/autotest_common.sh@1220 -- # return 0 00:11:04.462 06:35:59 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:04.462 06:35:59 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:04.462 06:35:59 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.721 06:36:00 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:04.721 06:36:00 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:04.721 06:36:00 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:04.721 06:36:00 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:04.721 06:36:00 -- target/fio.sh@91 -- # nvmftestfini 00:11:04.721 06:36:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:04.721 06:36:00 -- nvmf/common.sh@116 -- # sync 00:11:04.721 06:36:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:04.721 06:36:00 -- nvmf/common.sh@119 -- # set +e 00:11:04.721 06:36:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:04.721 06:36:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:04.721 rmmod nvme_tcp 00:11:04.721 rmmod nvme_fabrics 00:11:04.721 rmmod nvme_keyring 00:11:04.721 06:36:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:04.721 06:36:00 -- nvmf/common.sh@123 -- # set -e 00:11:04.721 06:36:00 -- nvmf/common.sh@124 -- # return 0 00:11:04.721 06:36:00 -- nvmf/common.sh@477 -- # '[' -n 75088 ']' 00:11:04.721 06:36:00 -- nvmf/common.sh@478 -- # killprocess 75088 00:11:04.721 06:36:00 -- common/autotest_common.sh@936 -- # '[' -z 75088 ']' 00:11:04.721 06:36:00 -- common/autotest_common.sh@940 -- # kill -0 75088 00:11:04.721 06:36:00 -- common/autotest_common.sh@941 -- # uname 00:11:04.721 06:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.721 06:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75088 00:11:04.721 killing process with pid 75088 00:11:04.721 06:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:04.721 06:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:04.721 06:36:00 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 75088' 00:11:04.721 06:36:00 -- common/autotest_common.sh@955 -- # kill 75088 00:11:04.721 06:36:00 -- common/autotest_common.sh@960 -- # wait 75088 00:11:04.980 06:36:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:04.980 06:36:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:04.980 06:36:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:04.980 06:36:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.980 06:36:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:04.980 06:36:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.980 06:36:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.980 06:36:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.980 06:36:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:04.980 00:11:04.980 real 0m19.454s 00:11:04.980 user 1m13.382s 00:11:04.980 sys 0m10.558s 00:11:04.980 06:36:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:04.980 ************************************ 00:11:04.980 END TEST nvmf_fio_target 00:11:04.980 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 ************************************ 00:11:04.980 06:36:00 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:04.980 06:36:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:04.980 06:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.980 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 ************************************ 00:11:04.980 START TEST nvmf_bdevio 00:11:04.980 ************************************ 00:11:04.980 06:36:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:05.239 * Looking for test storage... 00:11:05.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:05.239 06:36:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.240 06:36:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.240 06:36:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.240 06:36:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.240 06:36:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.240 06:36:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.240 06:36:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.240 06:36:00 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.240 06:36:00 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.240 06:36:00 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.240 06:36:00 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.240 06:36:00 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.240 06:36:00 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.240 06:36:00 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.240 06:36:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.240 06:36:00 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.240 06:36:00 -- scripts/common.sh@344 -- # : 1 00:11:05.240 06:36:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.240 06:36:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.240 06:36:00 -- scripts/common.sh@364 -- # decimal 1 00:11:05.240 06:36:00 -- scripts/common.sh@352 -- # local d=1 00:11:05.240 06:36:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.240 06:36:00 -- scripts/common.sh@354 -- # echo 1 00:11:05.240 06:36:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.240 06:36:00 -- scripts/common.sh@365 -- # decimal 2 00:11:05.240 06:36:00 -- scripts/common.sh@352 -- # local d=2 00:11:05.240 06:36:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.240 06:36:00 -- scripts/common.sh@354 -- # echo 2 00:11:05.240 06:36:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.240 06:36:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.240 06:36:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.240 06:36:00 -- scripts/common.sh@367 -- # return 0 00:11:05.240 06:36:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.240 06:36:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.240 --rc genhtml_branch_coverage=1 00:11:05.240 --rc genhtml_function_coverage=1 00:11:05.240 --rc genhtml_legend=1 00:11:05.240 --rc geninfo_all_blocks=1 00:11:05.240 --rc geninfo_unexecuted_blocks=1 00:11:05.240 00:11:05.240 ' 00:11:05.240 06:36:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.240 --rc genhtml_branch_coverage=1 00:11:05.240 --rc genhtml_function_coverage=1 00:11:05.240 --rc genhtml_legend=1 00:11:05.240 --rc geninfo_all_blocks=1 00:11:05.240 --rc geninfo_unexecuted_blocks=1 00:11:05.240 00:11:05.240 ' 00:11:05.240 06:36:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.240 --rc genhtml_branch_coverage=1 00:11:05.240 --rc genhtml_function_coverage=1 00:11:05.240 --rc genhtml_legend=1 00:11:05.240 --rc geninfo_all_blocks=1 00:11:05.240 --rc geninfo_unexecuted_blocks=1 00:11:05.240 00:11:05.240 ' 00:11:05.240 06:36:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.240 --rc genhtml_branch_coverage=1 00:11:05.240 --rc genhtml_function_coverage=1 00:11:05.240 --rc genhtml_legend=1 00:11:05.240 --rc geninfo_all_blocks=1 00:11:05.240 --rc geninfo_unexecuted_blocks=1 00:11:05.240 00:11:05.240 ' 00:11:05.240 06:36:00 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.240 06:36:00 -- nvmf/common.sh@7 -- # uname -s 00:11:05.240 06:36:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.240 06:36:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.240 06:36:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.240 06:36:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.240 06:36:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.240 06:36:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.240 06:36:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.240 06:36:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.240 06:36:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.240 06:36:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.240 06:36:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:05.240 
06:36:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:05.240 06:36:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.240 06:36:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.240 06:36:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.240 06:36:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.240 06:36:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.240 06:36:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.240 06:36:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.240 06:36:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.240 06:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.240 06:36:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.240 06:36:00 -- paths/export.sh@5 -- # export PATH 00:11:05.240 06:36:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.240 06:36:00 -- nvmf/common.sh@46 -- # : 0 00:11:05.240 06:36:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:05.240 06:36:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:05.240 06:36:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:05.240 06:36:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.240 06:36:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.240 06:36:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
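
An aside on the build_nvmf_app_args trace running through here (nvmf/common.sh@24-34; its last guard appears just below): it assembles the nvmf_tgt command line that nvmfappstart later launches. A minimal sketch of that assembly, with placeholder names for the two guards the trace shows evaluating false ('[' 0 -eq 1 ']' and '[' -n '' ']'), since their real variable names are not visible in the log:

    # Hedged sketch of build_nvmf_app_args as traced above. SUDO_MODE and
    # EXTRA_NVMF_ARGS are placeholders for the real, untraced guard variables.
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()   # the --no-huge variant later in this run sets flags here instead

    build_nvmf_app_args() {
        if [ "${SUDO_MODE:-0}" -eq 1 ]; then           # traced as '[' 0 -eq 1 ']'
            NVMF_APP=(sudo "${NVMF_APP[@]}")           # branch not taken in this run
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + 0xFFFF tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")                    # empty here; no-huge run adds flags
        if [ -n "${EXTRA_NVMF_ARGS:-}" ]; then         # traced as '[' -n '' ']'
            NVMF_APP+=($EXTRA_NVMF_ARGS)               # branch not taken in this run
        fi
    }

The resulting array surfaces later in this log as the launched command line: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78.
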
00:11:05.240 06:36:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:05.240 06:36:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:05.240 06:36:00 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.240 06:36:00 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.240 06:36:00 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:05.240 06:36:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:05.240 06:36:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.240 06:36:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:05.240 06:36:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:05.240 06:36:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:05.241 06:36:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.241 06:36:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.241 06:36:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.241 06:36:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:05.241 06:36:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:05.241 06:36:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:05.241 06:36:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:05.241 06:36:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:05.241 06:36:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:05.241 06:36:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.241 06:36:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.241 06:36:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:05.241 06:36:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:05.241 06:36:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:05.241 06:36:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:05.241 06:36:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:05.241 06:36:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.241 06:36:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:05.241 06:36:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:05.241 06:36:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:05.241 06:36:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:05.241 06:36:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:05.241 06:36:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:05.241 Cannot find device "nvmf_tgt_br" 00:11:05.241 06:36:00 -- nvmf/common.sh@154 -- # true 00:11:05.241 06:36:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.241 Cannot find device "nvmf_tgt_br2" 00:11:05.241 06:36:00 -- nvmf/common.sh@155 -- # true 00:11:05.241 06:36:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:05.241 06:36:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:05.241 Cannot find device "nvmf_tgt_br" 00:11:05.241 06:36:00 -- nvmf/common.sh@157 -- # true 00:11:05.241 06:36:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:05.241 Cannot find device "nvmf_tgt_br2" 00:11:05.241 06:36:00 -- nvmf/common.sh@158 -- # true 00:11:05.241 06:36:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:05.241 06:36:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:05.241 06:36:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.241 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:05.241 06:36:00 -- nvmf/common.sh@161 -- # true 00:11:05.241 06:36:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.500 06:36:00 -- nvmf/common.sh@162 -- # true 00:11:05.500 06:36:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.500 06:36:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.500 06:36:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.500 06:36:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.500 06:36:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.500 06:36:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.500 06:36:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.500 06:36:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.500 06:36:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.500 06:36:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:05.500 06:36:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:05.500 06:36:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:05.500 06:36:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:05.500 06:36:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.500 06:36:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.500 06:36:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.500 06:36:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:05.500 06:36:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:05.500 06:36:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.500 06:36:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.500 06:36:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.500 06:36:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.500 06:36:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.500 06:36:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:05.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:05.500 00:11:05.500 --- 10.0.0.2 ping statistics --- 00:11:05.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.500 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:05.500 06:36:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:05.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:05.500 00:11:05.500 --- 10.0.0.3 ping statistics --- 00:11:05.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.500 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:05.500 06:36:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:05.500 00:11:05.500 --- 10.0.0.1 ping statistics --- 00:11:05.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.500 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:05.500 06:36:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.500 06:36:00 -- nvmf/common.sh@421 -- # return 0 00:11:05.500 06:36:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:05.500 06:36:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.500 06:36:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:05.500 06:36:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:05.500 06:36:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.500 06:36:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:05.500 06:36:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:05.500 06:36:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:05.500 06:36:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:05.500 06:36:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.500 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:11:05.500 06:36:00 -- nvmf/common.sh@469 -- # nvmfpid=75786 00:11:05.500 06:36:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:05.500 06:36:00 -- nvmf/common.sh@470 -- # waitforlisten 75786 00:11:05.500 06:36:00 -- common/autotest_common.sh@829 -- # '[' -z 75786 ']' 00:11:05.500 06:36:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.500 06:36:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.500 06:36:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.500 06:36:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.500 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:11:05.500 [2024-12-05 06:36:00.946054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:05.500 [2024-12-05 06:36:00.946147] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.759 [2024-12-05 06:36:01.086189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.759 [2024-12-05 06:36:01.118227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.759 [2024-12-05 06:36:01.118382] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.759 [2024-12-05 06:36:01.118395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.759 [2024-12-05 06:36:01.118403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
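
For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@165-201) collapses to the short script below; every device name and address is taken verbatim from the log, and only the loops are a condensation. The earlier "Cannot find device" / "Cannot open network namespace" errors are expected: teardown of any previous topology runs unconditionally before this setup.

    # Build the test topology: initiator at 10.0.0.1 on the host, target
    # listeners at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace,
    # all veth peers joined by the nvmf_br bridge. Run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br      # bridge the host-side veth ends
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # reachability checks, as logged
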
00:11:05.759 [2024-12-05 06:36:01.119304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:05.759 [2024-12-05 06:36:01.119498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:05.759 [2024-12-05 06:36:01.119563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.759 [2024-12-05 06:36:01.119551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:06.695 06:36:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.695 06:36:01 -- common/autotest_common.sh@862 -- # return 0 00:11:06.695 06:36:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.695 06:36:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.695 06:36:01 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 06:36:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.695 06:36:02 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.695 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.695 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 [2024-12-05 06:36:02.010293] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.695 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.695 06:36:02 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.695 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.695 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 Malloc0 00:11:06.695 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.695 06:36:02 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.695 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.695 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.695 06:36:02 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.695 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.695 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.695 06:36:02 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.695 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.695 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:06.695 [2024-12-05 06:36:02.072313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.695 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.695 06:36:02 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:06.695 06:36:02 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:06.695 06:36:02 -- nvmf/common.sh@520 -- # config=() 00:11:06.695 06:36:02 -- nvmf/common.sh@520 -- # local subsystem config 00:11:06.695 06:36:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:06.695 06:36:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:06.695 { 00:11:06.695 "params": { 00:11:06.695 "name": "Nvme$subsystem", 00:11:06.695 "trtype": "$TEST_TRANSPORT", 00:11:06.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.695 "adrfam": "ipv4", 00:11:06.695 "trsvcid": "$NVMF_PORT", 00:11:06.695 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.695 "hdgst": ${hdgst:-false}, 00:11:06.695 "ddgst": ${ddgst:-false} 00:11:06.695 }, 00:11:06.695 "method": "bdev_nvme_attach_controller" 00:11:06.695 } 00:11:06.695 EOF 00:11:06.695 )") 00:11:06.695 06:36:02 -- nvmf/common.sh@542 -- # cat 00:11:06.695 06:36:02 -- nvmf/common.sh@544 -- # jq . 00:11:06.695 06:36:02 -- nvmf/common.sh@545 -- # IFS=, 00:11:06.695 06:36:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:06.695 "params": { 00:11:06.695 "name": "Nvme1", 00:11:06.695 "trtype": "tcp", 00:11:06.695 "traddr": "10.0.0.2", 00:11:06.695 "adrfam": "ipv4", 00:11:06.695 "trsvcid": "4420", 00:11:06.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.695 "hdgst": false, 00:11:06.695 "ddgst": false 00:11:06.695 }, 00:11:06.695 "method": "bdev_nvme_attach_controller" 00:11:06.695 }' 00:11:06.695 [2024-12-05 06:36:02.125122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:06.695 [2024-12-05 06:36:02.125225] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75825 ] 00:11:06.955 [2024-12-05 06:36:02.266521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.955 [2024-12-05 06:36:02.309425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.955 [2024-12-05 06:36:02.309565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.955 [2024-12-05 06:36:02.309796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.215 [2024-12-05 06:36:02.444421] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:07.215 [2024-12-05 06:36:02.444730] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:07.215 I/O targets: 00:11:07.215 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:07.215 00:11:07.215 00:11:07.216 CUnit - A unit testing framework for C - Version 2.1-3 00:11:07.216 http://cunit.sourceforge.net/ 00:11:07.216 00:11:07.216 00:11:07.216 Suite: bdevio tests on: Nvme1n1 00:11:07.216 Test: blockdev write read block ...passed 00:11:07.216 Test: blockdev write zeroes read block ...passed 00:11:07.216 Test: blockdev write zeroes read no split ...passed 00:11:07.216 Test: blockdev write zeroes read split ...passed 00:11:07.216 Test: blockdev write zeroes read split partial ...passed 00:11:07.216 Test: blockdev reset ...[2024-12-05 06:36:02.476424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:07.216 [2024-12-05 06:36:02.476663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f72a0 (9): Bad file descriptor 00:11:07.216 [2024-12-05 06:36:02.494630] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:07.216 passed 00:11:07.216 Test: blockdev write read 8 blocks ...passed 00:11:07.216 Test: blockdev write read size > 128k ...passed 00:11:07.216 Test: blockdev write read invalid size ...passed 00:11:07.216 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:07.216 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:07.216 Test: blockdev write read max offset ...passed 00:11:07.216 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:07.216 Test: blockdev writev readv 8 blocks ...passed 00:11:07.216 Test: blockdev writev readv 30 x 1block ...passed 00:11:07.216 Test: blockdev writev readv block ...passed 00:11:07.216 Test: blockdev writev readv size > 128k ...passed 00:11:07.216 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:07.216 Test: blockdev comparev and writev ...[2024-12-05 06:36:02.502186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.502249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.502287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.502656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.502681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.502698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.502709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.503041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.503069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.503088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.503098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.503426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.503454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.503473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.216 [2024-12-05 06:36:02.503483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:07.216 passed 00:11:07.216 Test: blockdev nvme passthru rw ...passed 00:11:07.216 Test: blockdev nvme passthru vendor specific ...[2024-12-05 06:36:02.504484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.216 [2024-12-05 06:36:02.504512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.504619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.216 [2024-12-05 06:36:02.504636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.504737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.216 [2024-12-05 06:36:02.504758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:07.216 [2024-12-05 06:36:02.504857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.216 [2024-12-05 06:36:02.504878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:07.216 passed 00:11:07.216 Test: blockdev nvme admin passthru ...passed 00:11:07.216 Test: blockdev copy ...passed 00:11:07.216 00:11:07.216 Run Summary: Type Total Ran Passed Failed Inactive 00:11:07.216 suites 1 1 n/a 0 0 00:11:07.216 tests 23 23 23 0 0 00:11:07.216 asserts 152 152 152 0 n/a 00:11:07.216 00:11:07.216 Elapsed time = 0.144 seconds 00:11:07.216 06:36:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.216 06:36:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.216 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:07.216 06:36:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.216 06:36:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:07.216 06:36:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:07.216 06:36:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:07.216 06:36:02 -- nvmf/common.sh@116 -- # sync 00:11:07.494 06:36:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:07.494 06:36:02 -- nvmf/common.sh@119 -- # set +e 00:11:07.494 06:36:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:07.494 06:36:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:07.494 rmmod nvme_tcp 00:11:07.494 rmmod nvme_fabrics 00:11:07.494 rmmod nvme_keyring 00:11:07.494 06:36:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:07.494 06:36:02 -- nvmf/common.sh@123 -- # set -e 00:11:07.494 06:36:02 -- nvmf/common.sh@124 -- # return 0 00:11:07.494 06:36:02 -- nvmf/common.sh@477 -- # '[' -n 75786 ']' 00:11:07.494 06:36:02 -- nvmf/common.sh@478 -- # killprocess 75786 00:11:07.494 06:36:02 -- common/autotest_common.sh@936 -- # '[' -z 75786 ']' 00:11:07.494 06:36:02 -- common/autotest_common.sh@940 -- # kill -0 75786 00:11:07.494 06:36:02 -- common/autotest_common.sh@941 -- # uname 00:11:07.494 06:36:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.494 06:36:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75786 00:11:07.494 06:36:02 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:11:07.494 06:36:02 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:07.494 killing process with pid 75786 00:11:07.494 06:36:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75786' 00:11:07.495 06:36:02 -- common/autotest_common.sh@955 -- # kill 75786 00:11:07.495 06:36:02 -- common/autotest_common.sh@960 -- # wait 75786 00:11:07.495 06:36:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:07.495 06:36:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:07.495 06:36:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:07.495 06:36:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.495 06:36:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:07.495 06:36:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.495 06:36:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.495 06:36:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.759 06:36:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:07.759 00:11:07.759 real 0m2.595s 00:11:07.759 user 0m8.482s 00:11:07.759 sys 0m0.675s 00:11:07.759 06:36:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.759 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:11:07.759 ************************************ 00:11:07.759 END TEST nvmf_bdevio 00:11:07.759 ************************************ 00:11:07.759 06:36:03 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:07.759 06:36:03 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:07.759 06:36:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:07.759 06:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.759 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:11:07.759 ************************************ 00:11:07.759 START TEST nvmf_bdevio_no_huge 00:11:07.759 ************************************ 00:11:07.759 06:36:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:07.759 * Looking for test storage... 
00:11:07.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.759 06:36:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:07.759 06:36:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:07.759 06:36:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:07.759 06:36:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:07.759 06:36:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:07.759 06:36:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:07.759 06:36:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:07.759 06:36:03 -- scripts/common.sh@335 -- # IFS=.-: 00:11:07.759 06:36:03 -- scripts/common.sh@335 -- # read -ra ver1 00:11:07.759 06:36:03 -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.759 06:36:03 -- scripts/common.sh@336 -- # read -ra ver2 00:11:07.759 06:36:03 -- scripts/common.sh@337 -- # local 'op=<' 00:11:07.759 06:36:03 -- scripts/common.sh@339 -- # ver1_l=2 00:11:07.759 06:36:03 -- scripts/common.sh@340 -- # ver2_l=1 00:11:07.759 06:36:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:07.759 06:36:03 -- scripts/common.sh@343 -- # case "$op" in 00:11:07.759 06:36:03 -- scripts/common.sh@344 -- # : 1 00:11:07.759 06:36:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:07.759 06:36:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.759 06:36:03 -- scripts/common.sh@364 -- # decimal 1 00:11:07.759 06:36:03 -- scripts/common.sh@352 -- # local d=1 00:11:07.759 06:36:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.759 06:36:03 -- scripts/common.sh@354 -- # echo 1 00:11:07.759 06:36:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:07.759 06:36:03 -- scripts/common.sh@365 -- # decimal 2 00:11:07.759 06:36:03 -- scripts/common.sh@352 -- # local d=2 00:11:07.759 06:36:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.759 06:36:03 -- scripts/common.sh@354 -- # echo 2 00:11:07.759 06:36:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:07.759 06:36:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:07.759 06:36:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:07.759 06:36:03 -- scripts/common.sh@367 -- # return 0 00:11:07.759 06:36:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.759 06:36:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.759 --rc genhtml_branch_coverage=1 00:11:07.759 --rc genhtml_function_coverage=1 00:11:07.759 --rc genhtml_legend=1 00:11:07.759 --rc geninfo_all_blocks=1 00:11:07.759 --rc geninfo_unexecuted_blocks=1 00:11:07.759 00:11:07.759 ' 00:11:07.759 06:36:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.759 --rc genhtml_branch_coverage=1 00:11:07.759 --rc genhtml_function_coverage=1 00:11:07.759 --rc genhtml_legend=1 00:11:07.759 --rc geninfo_all_blocks=1 00:11:07.759 --rc geninfo_unexecuted_blocks=1 00:11:07.759 00:11:07.759 ' 00:11:07.759 06:36:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.759 --rc genhtml_branch_coverage=1 00:11:07.759 --rc genhtml_function_coverage=1 00:11:07.759 --rc genhtml_legend=1 00:11:07.759 --rc geninfo_all_blocks=1 00:11:07.759 --rc geninfo_unexecuted_blocks=1 00:11:07.759 00:11:07.759 ' 00:11:07.759 
06:36:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.759 --rc genhtml_branch_coverage=1 00:11:07.759 --rc genhtml_function_coverage=1 00:11:07.759 --rc genhtml_legend=1 00:11:07.759 --rc geninfo_all_blocks=1 00:11:07.759 --rc geninfo_unexecuted_blocks=1 00:11:07.759 00:11:07.759 ' 00:11:07.759 06:36:03 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.759 06:36:03 -- nvmf/common.sh@7 -- # uname -s 00:11:07.759 06:36:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.759 06:36:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.759 06:36:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.759 06:36:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.759 06:36:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.759 06:36:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.759 06:36:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.759 06:36:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.759 06:36:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.759 06:36:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.759 06:36:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:07.759 06:36:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:07.759 06:36:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.759 06:36:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.759 06:36:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.759 06:36:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.759 06:36:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.759 06:36:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.759 06:36:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.759 06:36:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.759 06:36:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.759 06:36:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.759 06:36:03 -- paths/export.sh@5 -- # export PATH 00:11:07.759 06:36:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.759 06:36:03 -- nvmf/common.sh@46 -- # : 0 00:11:07.759 06:36:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:07.759 06:36:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:07.759 06:36:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:07.759 06:36:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.759 06:36:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.759 06:36:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:07.759 06:36:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:07.759 06:36:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:07.760 06:36:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.760 06:36:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.760 06:36:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:07.760 06:36:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:08.019 06:36:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.019 06:36:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:08.019 06:36:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:08.019 06:36:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:08.019 06:36:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.019 06:36:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.019 06:36:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.019 06:36:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:08.019 06:36:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:08.019 06:36:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:08.019 06:36:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:08.019 06:36:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:08.019 06:36:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:08.019 06:36:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.019 06:36:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.019 06:36:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:08.019 06:36:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:08.019 06:36:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.019 06:36:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.019 06:36:03 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.019 06:36:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.019 06:36:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.019 06:36:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.019 06:36:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.019 06:36:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.019 06:36:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:08.019 06:36:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:08.019 Cannot find device "nvmf_tgt_br" 00:11:08.019 06:36:03 -- nvmf/common.sh@154 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.019 Cannot find device "nvmf_tgt_br2" 00:11:08.019 06:36:03 -- nvmf/common.sh@155 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:08.019 06:36:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:08.019 Cannot find device "nvmf_tgt_br" 00:11:08.019 06:36:03 -- nvmf/common.sh@157 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:08.019 Cannot find device "nvmf_tgt_br2" 00:11:08.019 06:36:03 -- nvmf/common.sh@158 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:08.019 06:36:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:08.019 06:36:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.019 06:36:03 -- nvmf/common.sh@161 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.019 06:36:03 -- nvmf/common.sh@162 -- # true 00:11:08.019 06:36:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.019 06:36:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.019 06:36:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.019 06:36:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.019 06:36:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.019 06:36:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.019 06:36:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.019 06:36:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.019 06:36:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.019 06:36:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:08.019 06:36:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:08.019 06:36:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:08.020 06:36:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:08.020 06:36:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.020 06:36:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.020 06:36:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:08.020 06:36:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:08.279 06:36:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:08.279 06:36:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.279 06:36:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.279 06:36:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.279 06:36:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.279 06:36:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.279 06:36:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:08.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:08.279 00:11:08.279 --- 10.0.0.2 ping statistics --- 00:11:08.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.279 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:08.279 06:36:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:08.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:11:08.279 00:11:08.279 --- 10.0.0.3 ping statistics --- 00:11:08.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.279 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:08.279 06:36:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:08.279 00:11:08.279 --- 10.0.0.1 ping statistics --- 00:11:08.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.279 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:08.279 06:36:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.279 06:36:03 -- nvmf/common.sh@421 -- # return 0 00:11:08.279 06:36:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:08.279 06:36:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.279 06:36:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:08.279 06:36:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:08.279 06:36:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.279 06:36:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:08.279 06:36:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:08.279 06:36:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:08.279 06:36:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:08.279 06:36:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.279 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:11:08.279 06:36:03 -- nvmf/common.sh@469 -- # nvmfpid=76005 00:11:08.279 06:36:03 -- nvmf/common.sh@470 -- # waitforlisten 76005 00:11:08.279 06:36:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:08.279 06:36:03 -- common/autotest_common.sh@829 -- # '[' -z 76005 ']' 00:11:08.279 06:36:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.279 06:36:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.279 06:36:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
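
The waitforlisten helper being traced here (common/autotest_common.sh@829-838) blocks until the just-launched nvmf_tgt both stays alive and answers RPCs; the message echoed on the next line comes from it. A simplified sketch of its polling loop, assuming it probes the socket with the rpc_get_methods RPC (the real helper may differ in detail):

    # Hedged sketch: wait until $pid exposes a working RPC server on $rpc_addr.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2> /dev/null || return 1    # target process died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                               # RPC socket is live
            fi
            sleep 0.5
        done
        return 1                                       # timed out
    }
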
00:11:08.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.279 06:36:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.279 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:11:08.279 [2024-12-05 06:36:03.627307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:08.279 [2024-12-05 06:36:03.627423] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:08.538 [2024-12-05 06:36:03.761721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.538 [2024-12-05 06:36:03.877589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.538 [2024-12-05 06:36:03.877784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.538 [2024-12-05 06:36:03.877802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.538 [2024-12-05 06:36:03.877814] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.538 [2024-12-05 06:36:03.877970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.538 [2024-12-05 06:36:03.878651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:08.538 [2024-12-05 06:36:03.878761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:08.538 [2024-12-05 06:36:03.878769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.474 06:36:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.474 06:36:04 -- common/autotest_common.sh@862 -- # return 0 00:11:09.474 06:36:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:09.474 06:36:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 06:36:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.474 06:36:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.474 06:36:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 [2024-12-05 06:36:04.631074] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.474 06:36:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.474 06:36:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.474 06:36:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 Malloc0 00:11:09.474 06:36:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.474 06:36:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.474 06:36:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 06:36:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.474 06:36:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.474 06:36:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 
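
At this point the target side is fully provisioned except for the listener, which the nvmf_subsystem_add_listener call just below adds. Replayed as plain rpc.py invocations (rpc_cmd is effectively a wrapper around this script), the sequence traced in this block is, verbatim from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u 8192 = in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

Once the listener is up on 10.0.0.2:4420, bdevio attaches from the initiator side through the bdev_nvme_attach_controller config that gen_nvmf_target_json writes to /dev/fd/62, visible in full a few records below.
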
00:11:09.474 06:36:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.474 06:36:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.474 06:36:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.474 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 [2024-12-05 06:36:04.677704] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.474 06:36:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.474 06:36:04 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:09.474 06:36:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.474 06:36:04 -- nvmf/common.sh@520 -- # config=() 00:11:09.474 06:36:04 -- nvmf/common.sh@520 -- # local subsystem config 00:11:09.474 06:36:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:09.474 06:36:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:09.474 { 00:11:09.474 "params": { 00:11:09.474 "name": "Nvme$subsystem", 00:11:09.474 "trtype": "$TEST_TRANSPORT", 00:11:09.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.474 "adrfam": "ipv4", 00:11:09.474 "trsvcid": "$NVMF_PORT", 00:11:09.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.474 "hdgst": ${hdgst:-false}, 00:11:09.474 "ddgst": ${ddgst:-false} 00:11:09.474 }, 00:11:09.474 "method": "bdev_nvme_attach_controller" 00:11:09.474 } 00:11:09.474 EOF 00:11:09.474 )") 00:11:09.474 06:36:04 -- nvmf/common.sh@542 -- # cat 00:11:09.474 06:36:04 -- nvmf/common.sh@544 -- # jq . 00:11:09.474 06:36:04 -- nvmf/common.sh@545 -- # IFS=, 00:11:09.474 06:36:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:09.474 "params": { 00:11:09.474 "name": "Nvme1", 00:11:09.474 "trtype": "tcp", 00:11:09.474 "traddr": "10.0.0.2", 00:11:09.474 "adrfam": "ipv4", 00:11:09.474 "trsvcid": "4420", 00:11:09.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.474 "hdgst": false, 00:11:09.474 "ddgst": false 00:11:09.474 }, 00:11:09.474 "method": "bdev_nvme_attach_controller" 00:11:09.474 }' 00:11:09.474 [2024-12-05 06:36:04.728210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:09.474 [2024-12-05 06:36:04.728331] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76041 ] 00:11:09.474 [2024-12-05 06:36:04.859757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.742 [2024-12-05 06:36:04.944037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.742 [2024-12-05 06:36:04.944170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.742 [2024-12-05 06:36:04.944167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.742 [2024-12-05 06:36:05.082566] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:09.742 [2024-12-05 06:36:05.082622] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:09.743 I/O targets: 00:11:09.743 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:09.743 00:11:09.743 00:11:09.743 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.743 http://cunit.sourceforge.net/ 00:11:09.743 00:11:09.743 00:11:09.743 Suite: bdevio tests on: Nvme1n1 00:11:09.743 Test: blockdev write read block ...passed 00:11:09.743 Test: blockdev write zeroes read block ...passed 00:11:09.743 Test: blockdev write zeroes read no split ...passed 00:11:09.743 Test: blockdev write zeroes read split ...passed 00:11:09.743 Test: blockdev write zeroes read split partial ...passed 00:11:09.743 Test: blockdev reset ...[2024-12-05 06:36:05.122443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:09.743 [2024-12-05 06:36:05.122553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aa760 (9): Bad file descriptor 00:11:09.743 [2024-12-05 06:36:05.142029] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:09.743 passed 00:11:09.743 Test: blockdev write read 8 blocks ...passed 00:11:09.743 Test: blockdev write read size > 128k ...passed 00:11:09.743 Test: blockdev write read invalid size ...passed 00:11:09.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.743 Test: blockdev write read max offset ...passed 00:11:09.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.743 Test: blockdev writev readv 8 blocks ...passed 00:11:09.743 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.743 Test: blockdev writev readv block ...passed 00:11:09.743 Test: blockdev writev readv size > 128k ...passed 00:11:09.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.743 Test: blockdev comparev and writev ...[2024-12-05 06:36:05.151109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.743 [2024-12-05 06:36:05.151403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:09.743 [2024-12-05 06:36:05.151599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.743 [2024-12-05 06:36:05.151789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:09.743 [2024-12-05 06:36:05.152273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.743 [2024-12-05 06:36:05.152493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.152686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.744 [2024-12-05 06:36:05.152858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.744 [2024-12-05 06:36:05.153521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.153714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.744 [2024-12-05 06:36:05.153882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.154400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.744 [2024-12-05 06:36:05.154584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.154764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.744 [2024-12-05 06:36:05.154979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:09.744 passed 00:11:09.744 Test: blockdev nvme passthru rw ...passed 00:11:09.744 Test: blockdev nvme passthru vendor specific ...[2024-12-05 06:36:05.156169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.744 [2024-12-05 06:36:05.156377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.156680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.744 [2024-12-05 06:36:05.156832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:09.744 [2024-12-05 06:36:05.157148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.744 [2024-12-05 06:36:05.157334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:09.745 [2024-12-05 06:36:05.157631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.745 [2024-12-05 06:36:05.157787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:09.745 passed 00:11:09.745 Test: blockdev nvme admin passthru ...passed 00:11:09.745 Test: blockdev copy ...passed 00:11:09.745 00:11:09.745 Run Summary: Type Total Ran Passed Failed Inactive 00:11:09.745 suites 1 1 n/a 0 0 00:11:09.745 tests 23 23 23 0 0 00:11:09.745 asserts 152 152 152 0 n/a 00:11:09.745 00:11:09.745 Elapsed time = 0.179 seconds 00:11:10.008 06:36:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.008 06:36:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.008 06:36:05 -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 06:36:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.008 06:36:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:10.008 06:36:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:10.008 06:36:05 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:10.008 06:36:05 -- nvmf/common.sh@116 -- # sync 00:11:10.266 06:36:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:10.266 06:36:05 -- nvmf/common.sh@119 -- # set +e 00:11:10.266 06:36:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:10.266 06:36:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:10.266 rmmod nvme_tcp 00:11:10.266 rmmod nvme_fabrics 00:11:10.266 rmmod nvme_keyring 00:11:10.266 06:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:10.266 06:36:05 -- nvmf/common.sh@123 -- # set -e 00:11:10.266 06:36:05 -- nvmf/common.sh@124 -- # return 0 00:11:10.266 06:36:05 -- nvmf/common.sh@477 -- # '[' -n 76005 ']' 00:11:10.266 06:36:05 -- nvmf/common.sh@478 -- # killprocess 76005 00:11:10.266 06:36:05 -- common/autotest_common.sh@936 -- # '[' -z 76005 ']' 00:11:10.266 06:36:05 -- common/autotest_common.sh@940 -- # kill -0 76005 00:11:10.266 06:36:05 -- common/autotest_common.sh@941 -- # uname 00:11:10.266 06:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.266 06:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76005 00:11:10.266 06:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:10.266 06:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:10.266 killing process with pid 76005 00:11:10.266 06:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76005' 00:11:10.266 06:36:05 -- common/autotest_common.sh@955 -- # kill 76005 00:11:10.266 06:36:05 -- common/autotest_common.sh@960 -- # wait 76005 00:11:10.525 06:36:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:10.525 06:36:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:10.525 06:36:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:10.525 06:36:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.525 06:36:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:10.525 06:36:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.525 06:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.525 06:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.525 06:36:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:10.525 00:11:10.525 real 0m2.934s 00:11:10.525 user 0m9.220s 00:11:10.525 sys 0m1.180s 00:11:10.525 06:36:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.525 06:36:05 -- common/autotest_common.sh@10 -- # set +x 00:11:10.525 ************************************ 00:11:10.525 END TEST nvmf_bdevio_no_huge 00:11:10.525 ************************************ 00:11:10.785 06:36:05 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:10.785 06:36:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.785 06:36:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.785 06:36:05 -- common/autotest_common.sh@10 -- # set +x 00:11:10.785 ************************************ 00:11:10.785 START TEST nvmf_tls 00:11:10.785 ************************************ 00:11:10.785 06:36:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:10.785 * Looking for test storage... 
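The killprocess trace above shows the harness's standard teardown: confirm the pid is alive with kill -0, resolve its command name with ps, check it is not a sudo wrapper, then kill and wait so the exit status is reaped. A simplified standalone rendering of the idea (the real helper in autotest_common.sh carries extra platform and sudo branches):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 0                 # nothing to do if already gone
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_3 for an SPDK app
        [ "$name" = sudo ] && return 1             # simplified: never kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap; wait only works for our own children
    }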
00:11:10.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.785 06:36:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:10.785 06:36:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:10.785 06:36:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:10.785 06:36:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:10.785 06:36:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:10.785 06:36:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:10.785 06:36:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:10.785 06:36:06 -- scripts/common.sh@335 -- # IFS=.-: 00:11:10.785 06:36:06 -- scripts/common.sh@335 -- # read -ra ver1 00:11:10.785 06:36:06 -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.785 06:36:06 -- scripts/common.sh@336 -- # read -ra ver2 00:11:10.785 06:36:06 -- scripts/common.sh@337 -- # local 'op=<' 00:11:10.785 06:36:06 -- scripts/common.sh@339 -- # ver1_l=2 00:11:10.785 06:36:06 -- scripts/common.sh@340 -- # ver2_l=1 00:11:10.785 06:36:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:10.785 06:36:06 -- scripts/common.sh@343 -- # case "$op" in 00:11:10.785 06:36:06 -- scripts/common.sh@344 -- # : 1 00:11:10.785 06:36:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:10.785 06:36:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.785 06:36:06 -- scripts/common.sh@364 -- # decimal 1 00:11:10.785 06:36:06 -- scripts/common.sh@352 -- # local d=1 00:11:10.785 06:36:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.785 06:36:06 -- scripts/common.sh@354 -- # echo 1 00:11:10.785 06:36:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:10.785 06:36:06 -- scripts/common.sh@365 -- # decimal 2 00:11:10.785 06:36:06 -- scripts/common.sh@352 -- # local d=2 00:11:10.785 06:36:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.785 06:36:06 -- scripts/common.sh@354 -- # echo 2 00:11:10.785 06:36:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:10.785 06:36:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:10.785 06:36:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:10.785 06:36:06 -- scripts/common.sh@367 -- # return 0 00:11:10.785 06:36:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.785 06:36:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.785 --rc genhtml_branch_coverage=1 00:11:10.785 --rc genhtml_function_coverage=1 00:11:10.785 --rc genhtml_legend=1 00:11:10.785 --rc geninfo_all_blocks=1 00:11:10.785 --rc geninfo_unexecuted_blocks=1 00:11:10.785 00:11:10.785 ' 00:11:10.785 06:36:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.785 --rc genhtml_branch_coverage=1 00:11:10.785 --rc genhtml_function_coverage=1 00:11:10.785 --rc genhtml_legend=1 00:11:10.785 --rc geninfo_all_blocks=1 00:11:10.785 --rc geninfo_unexecuted_blocks=1 00:11:10.785 00:11:10.785 ' 00:11:10.785 06:36:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.785 --rc genhtml_branch_coverage=1 00:11:10.785 --rc genhtml_function_coverage=1 00:11:10.785 --rc genhtml_legend=1 00:11:10.785 --rc geninfo_all_blocks=1 00:11:10.785 --rc geninfo_unexecuted_blocks=1 00:11:10.785 00:11:10.785 ' 00:11:10.785 
06:36:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:10.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.785 --rc genhtml_branch_coverage=1 00:11:10.785 --rc genhtml_function_coverage=1 00:11:10.785 --rc genhtml_legend=1 00:11:10.785 --rc geninfo_all_blocks=1 00:11:10.785 --rc geninfo_unexecuted_blocks=1 00:11:10.785 00:11:10.785 ' 00:11:10.785 06:36:06 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.785 06:36:06 -- nvmf/common.sh@7 -- # uname -s 00:11:10.785 06:36:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.785 06:36:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.785 06:36:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.785 06:36:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.785 06:36:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.785 06:36:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.785 06:36:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.785 06:36:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.785 06:36:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.785 06:36:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.785 06:36:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:10.785 06:36:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:11:10.786 06:36:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.786 06:36:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.786 06:36:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.786 06:36:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.786 06:36:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.786 06:36:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.786 06:36:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.786 06:36:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.786 06:36:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.786 06:36:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.786 06:36:06 -- paths/export.sh@5 -- # export PATH 00:11:10.786 06:36:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.786 06:36:06 -- nvmf/common.sh@46 -- # : 0 00:11:10.786 06:36:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:10.786 06:36:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:10.786 06:36:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:10.786 06:36:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.786 06:36:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.786 06:36:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:10.786 06:36:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:10.786 06:36:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:10.786 06:36:06 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:10.786 06:36:06 -- target/tls.sh@71 -- # nvmftestinit 00:11:10.786 06:36:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:10.786 06:36:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.786 06:36:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:10.786 06:36:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:10.786 06:36:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:10.786 06:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.786 06:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.786 06:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.786 06:36:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:10.786 06:36:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:10.786 06:36:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:10.786 06:36:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:10.786 06:36:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:10.786 06:36:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:10.786 06:36:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.786 06:36:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.786 06:36:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:10.786 06:36:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:10.786 06:36:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.786 06:36:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.786 06:36:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.786 
06:36:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.786 06:36:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.786 06:36:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.786 06:36:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.786 06:36:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.786 06:36:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:10.786 06:36:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:10.786 Cannot find device "nvmf_tgt_br" 00:11:10.786 06:36:06 -- nvmf/common.sh@154 -- # true 00:11:10.786 06:36:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.786 Cannot find device "nvmf_tgt_br2" 00:11:10.786 06:36:06 -- nvmf/common.sh@155 -- # true 00:11:10.786 06:36:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:10.786 06:36:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:11.045 Cannot find device "nvmf_tgt_br" 00:11:11.045 06:36:06 -- nvmf/common.sh@157 -- # true 00:11:11.045 06:36:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:11.045 Cannot find device "nvmf_tgt_br2" 00:11:11.045 06:36:06 -- nvmf/common.sh@158 -- # true 00:11:11.045 06:36:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:11.045 06:36:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:11.045 06:36:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.045 06:36:06 -- nvmf/common.sh@161 -- # true 00:11:11.045 06:36:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.045 06:36:06 -- nvmf/common.sh@162 -- # true 00:11:11.045 06:36:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.045 06:36:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.045 06:36:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.045 06:36:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.045 06:36:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.045 06:36:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.045 06:36:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.045 06:36:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.045 06:36:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.045 06:36:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:11.045 06:36:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:11.045 06:36:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:11.045 06:36:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:11.045 06:36:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.045 06:36:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.045 06:36:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.045 06:36:06 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:11.045 06:36:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:11.045 06:36:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.045 06:36:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.045 06:36:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.304 06:36:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.304 06:36:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.304 06:36:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:11.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:11.304 00:11:11.304 --- 10.0.0.2 ping statistics --- 00:11:11.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.304 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:11.304 06:36:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:11.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:11.304 00:11:11.304 --- 10.0.0.3 ping statistics --- 00:11:11.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.304 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:11.304 06:36:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:11.304 00:11:11.304 --- 10.0.0.1 ping statistics --- 00:11:11.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.304 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:11.304 06:36:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.304 06:36:06 -- nvmf/common.sh@421 -- # return 0 00:11:11.304 06:36:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:11.304 06:36:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.304 06:36:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:11.304 06:36:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:11.304 06:36:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.304 06:36:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:11.304 06:36:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:11.304 06:36:06 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:11.304 06:36:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:11.304 06:36:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:11.304 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:11:11.304 06:36:06 -- nvmf/common.sh@469 -- # nvmfpid=76222 00:11:11.304 06:36:06 -- nvmf/common.sh@470 -- # waitforlisten 76222 00:11:11.304 06:36:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:11.304 06:36:06 -- common/autotest_common.sh@829 -- # '[' -z 76222 ']' 00:11:11.304 06:36:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.304 06:36:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
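The nvmf_veth_init sequence above, after tearing down any leftovers, builds the test topology: an initiator veth pair kept in the root namespace, target pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, a bridge joining the root-side ends, an iptables accept rule for port 4420, and ping checks across the link. Condensed to its essentials, with the same names and addresses as the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # a second target pair (nvmf_tgt_if2, 10.0.0.3) is created the same way

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace

The target app is then launched inside the namespace via ip netns exec, which is why its listener at 10.0.0.2:4420 is reachable only through this veth/bridge path.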
00:11:11.304 06:36:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.304 06:36:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.304 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:11:11.304 [2024-12-05 06:36:06.600673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:11.304 [2024-12-05 06:36:06.600751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.304 [2024-12-05 06:36:06.739731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.563 [2024-12-05 06:36:06.780719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:11.564 [2024-12-05 06:36:06.780894] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.564 [2024-12-05 06:36:06.780909] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.564 [2024-12-05 06:36:06.780921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.564 [2024-12-05 06:36:06.780957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.564 06:36:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.564 06:36:06 -- common/autotest_common.sh@862 -- # return 0 00:11:11.564 06:36:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.564 06:36:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.564 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:11:11.564 06:36:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.564 06:36:06 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:11.564 06:36:06 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:11.822 true 00:11:11.822 06:36:07 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:11.822 06:36:07 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:12.081 06:36:07 -- target/tls.sh@82 -- # version=0 00:11:12.081 06:36:07 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:12.081 06:36:07 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:12.339 06:36:07 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:12.339 06:36:07 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:12.596 06:36:07 -- target/tls.sh@90 -- # version=13 00:11:12.596 06:36:07 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:12.596 06:36:07 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:12.889 06:36:08 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:12.889 06:36:08 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.148 06:36:08 -- target/tls.sh@98 -- # version=7 00:11:13.148 06:36:08 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:13.148 06:36:08 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:13.148 06:36:08 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.148 06:36:08 -- 
target/tls.sh@105 -- # ktls=false 00:11:13.148 06:36:08 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:13.148 06:36:08 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:13.408 06:36:08 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:13.408 06:36:08 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:13.666 06:36:09 -- target/tls.sh@113 -- # ktls=true 00:11:13.666 06:36:09 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:13.666 06:36:09 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:13.925 06:36:09 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:13.925 06:36:09 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:14.183 06:36:09 -- target/tls.sh@121 -- # ktls=false 00:11:14.183 06:36:09 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:14.183 06:36:09 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:14.183 06:36:09 -- target/tls.sh@49 -- # local key hash crc 00:11:14.183 06:36:09 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:14.183 06:36:09 -- target/tls.sh@51 -- # hash=01 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # head -c 4 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # tail -c8 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # gzip -1 -c 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # crc='p$H�' 00:11:14.183 06:36:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:14.183 06:36:09 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:14.183 06:36:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.183 06:36:09 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.183 06:36:09 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:14.183 06:36:09 -- target/tls.sh@49 -- # local key hash crc 00:11:14.183 06:36:09 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:14.183 06:36:09 -- target/tls.sh@51 -- # hash=01 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # gzip -1 -c 00:11:14.183 06:36:09 -- target/tls.sh@52 -- # tail -c8 00:11:14.184 06:36:09 -- target/tls.sh@52 -- # head -c 4 00:11:14.184 06:36:09 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:14.184 06:36:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:14.184 06:36:09 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:14.184 06:36:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:14.184 06:36:09 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:14.184 06:36:09 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.184 06:36:09 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:14.184 06:36:09 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:14.184 06:36:09 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
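After verifying that the ssl sock implementation options round-trip (tls_version set/get, enable_ktls toggled and read back with jq), the test builds its PSKs. format_interchange_psk above emits the NVMe TLS PSK interchange form NVMeTLSkey-1:01:<base64>:, where the middle field is a hash identifier (01 here) and the base64 payload is the configured key with its CRC32 appended as an integrity check. The CRC comes from a shell trick: a gzip stream's trailer is the input's CRC32 (little-endian, 4 bytes) followed by its length, so tail -c8 | head -c4 on gzip -1 -c output slices the CRC out. A standalone sketch with the same key as above (note the hex string is used as 32 ASCII bytes, and, as in the test script, this relies on the CRC bytes containing no NULs, which command substitution would drop):

    key=00112233445566778899aabbccddeeff
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC32 from the gzip trailer
    echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: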
00:11:14.184 06:36:09 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:14.184 06:36:09 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:14.184 06:36:09 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:14.441 06:36:09 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:15.007 06:36:10 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:15.007 06:36:10 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:15.007 06:36:10 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:15.007 [2024-12-05 06:36:10.351836] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.007 06:36:10 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:15.265 06:36:10 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:15.523 [2024-12-05 06:36:10.851945] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:15.523 [2024-12-05 06:36:10.852165] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.523 06:36:10 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:15.782 malloc0 00:11:15.782 06:36:11 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:16.040 06:36:11 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:16.299 06:36:11 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.308 Initializing NVMe Controllers 00:11:26.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:26.308 Initialization complete. Launching workers. 
00:11:26.308 ======================================================== 00:11:26.308 Latency(us) 00:11:26.308 Device Information : IOPS MiB/s Average min max 00:11:26.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10489.59 40.97 6102.54 1373.75 8703.03 00:11:26.308 ======================================================== 00:11:26.308 Total : 10489.59 40.97 6102.54 1373.75 8703.03 00:11:26.308 00:11:26.309 06:36:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:26.309 06:36:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:26.309 06:36:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:26.309 06:36:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:26.309 06:36:21 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:26.309 06:36:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:26.309 06:36:21 -- target/tls.sh@28 -- # bdevperf_pid=76457 00:11:26.309 06:36:21 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:26.309 06:36:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:26.309 06:36:21 -- target/tls.sh@31 -- # waitforlisten 76457 /var/tmp/bdevperf.sock 00:11:26.309 06:36:21 -- common/autotest_common.sh@829 -- # '[' -z 76457 ']' 00:11:26.309 06:36:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.309 06:36:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.309 06:36:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:26.309 06:36:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.309 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:11:26.309 [2024-12-05 06:36:21.737497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:26.309 [2024-12-05 06:36:21.737800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76457 ] 00:11:26.567 [2024-12-05 06:36:21.878475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.567 [2024-12-05 06:36:21.918997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.503 06:36:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.503 06:36:22 -- common/autotest_common.sh@862 -- # return 0 00:11:27.503 06:36:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:27.503 [2024-12-05 06:36:22.938854] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:27.762 TLSTESTn1 00:11:27.762 06:36:23 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:27.762 Running I/O for 10 seconds... 
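The run_bdevperf step above pairs TLS configuration on both sides. On the target, setup_nvmf_tgt added the listener with -k and registered the host's key with nvmf_subsystem_add_host --psk; on the initiator, bdevperf starts idle with -z, the controller is attached over bdevperf's private RPC socket with the matching --psk, and bdevperf.py perform_tests kicks off the 10-second verify run whose results follow. The skeleton, using the exact flags from the log:

    # target side (issued inside the nvmf_tgt_ns_spdk namespace)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: TLS-enabled listener
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt

    # initiator side
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &                 # -z: wait for RPC-driven work
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Both sides must hold the same interchange string; the failure cases that follow vary exactly one of key, hostnqn, or subnqn to prove the handshake rejects mismatches.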
00:11:37.773 00:11:37.773 Latency(us) 00:11:37.773 [2024-12-05T06:36:33.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.773 [2024-12-05T06:36:33.239Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:37.773 Verification LBA range: start 0x0 length 0x2000 00:11:37.773 TLSTESTn1 : 10.02 5906.31 23.07 0.00 0.00 21634.83 4200.26 20256.58 00:11:37.773 [2024-12-05T06:36:33.239Z] =================================================================================================================== 00:11:37.773 [2024-12-05T06:36:33.239Z] Total : 5906.31 23.07 0.00 0.00 21634.83 4200.26 20256.58 00:11:37.773 0 00:11:37.773 06:36:33 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.773 06:36:33 -- target/tls.sh@45 -- # killprocess 76457 00:11:37.773 06:36:33 -- common/autotest_common.sh@936 -- # '[' -z 76457 ']' 00:11:37.773 06:36:33 -- common/autotest_common.sh@940 -- # kill -0 76457 00:11:37.773 06:36:33 -- common/autotest_common.sh@941 -- # uname 00:11:37.773 06:36:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:37.773 06:36:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76457 00:11:37.773 killing process with pid 76457 00:11:37.773 Received shutdown signal, test time was about 10.000000 seconds 00:11:37.773 00:11:37.773 Latency(us) 00:11:37.773 [2024-12-05T06:36:33.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.773 [2024-12-05T06:36:33.239Z] =================================================================================================================== 00:11:37.773 [2024-12-05T06:36:33.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:37.773 06:36:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:37.773 06:36:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:37.773 06:36:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76457' 00:11:37.773 06:36:33 -- common/autotest_common.sh@955 -- # kill 76457 00:11:37.773 06:36:33 -- common/autotest_common.sh@960 -- # wait 76457 00:11:38.032 06:36:33 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.032 06:36:33 -- common/autotest_common.sh@650 -- # local es=0 00:11:38.032 06:36:33 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.032 06:36:33 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:38.032 06:36:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.032 06:36:33 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:38.032 06:36:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.032 06:36:33 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:38.032 06:36:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:38.032 06:36:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:38.032 06:36:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:38.032 06:36:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:38.032 06:36:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:38.032 
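From here the suite runs its negative cases: each expected-failure attach is wrapped in the harness's NOT helper, which runs the command, captures es, and inverts the outcome so that an attach which correctly fails keeps the test green. A simplified rendering of the idea (the real helper in autotest_common.sh also validates the argument via valid_exec_arg and handles the es > 128 crash range more carefully):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: never an "expected" failure
        (( es == 0 )) && return 1        # unexpectedly succeeded
        return 0                          # failed as expected
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key2.txt

In the trace above this is visible as es=1 after the failed attach, followed by the (( es > 128 )) and (( !es == 0 )) checks resolving in the test's favor.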
06:36:33 -- target/tls.sh@28 -- # bdevperf_pid=76590 00:11:38.032 06:36:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:38.032 06:36:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:38.032 06:36:33 -- target/tls.sh@31 -- # waitforlisten 76590 /var/tmp/bdevperf.sock 00:11:38.032 06:36:33 -- common/autotest_common.sh@829 -- # '[' -z 76590 ']' 00:11:38.032 06:36:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:38.032 06:36:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.032 06:36:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:38.032 06:36:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.032 06:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 [2024-12-05 06:36:33.391938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:38.032 [2024-12-05 06:36:33.392211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76590 ] 00:11:38.290 [2024-12-05 06:36:33.532163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.290 [2024-12-05 06:36:33.568671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.226 06:36:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.226 06:36:34 -- common/autotest_common.sh@862 -- # return 0 00:11:39.226 06:36:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:39.226 [2024-12-05 06:36:34.581087] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:39.226 [2024-12-05 06:36:34.591330] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:39.226 [2024-12-05 06:36:34.591724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fdb80 (107): Transport endpoint is not connected 00:11:39.226 [2024-12-05 06:36:34.592714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fdb80 (9): Bad file descriptor 00:11:39.226 [2024-12-05 06:36:34.593710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:39.226 [2024-12-05 06:36:34.593734] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:39.226 [2024-12-05 06:36:34.593761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:39.226 request: 00:11:39.226 { 00:11:39.226 "name": "TLSTEST", 00:11:39.226 "trtype": "tcp", 00:11:39.226 "traddr": "10.0.0.2", 00:11:39.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.226 "adrfam": "ipv4", 00:11:39.226 "trsvcid": "4420", 00:11:39.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.226 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:39.226 "method": "bdev_nvme_attach_controller", 00:11:39.226 "req_id": 1 00:11:39.226 } 00:11:39.226 Got JSON-RPC error response 00:11:39.226 response: 00:11:39.226 { 00:11:39.226 "code": -32602, 00:11:39.226 "message": "Invalid parameters" 00:11:39.226 } 00:11:39.226 06:36:34 -- target/tls.sh@36 -- # killprocess 76590 00:11:39.226 06:36:34 -- common/autotest_common.sh@936 -- # '[' -z 76590 ']' 00:11:39.226 06:36:34 -- common/autotest_common.sh@940 -- # kill -0 76590 00:11:39.226 06:36:34 -- common/autotest_common.sh@941 -- # uname 00:11:39.226 06:36:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.226 06:36:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76590 00:11:39.226 killing process with pid 76590 00:11:39.226 Received shutdown signal, test time was about 10.000000 seconds 00:11:39.226 00:11:39.226 Latency(us) 00:11:39.226 [2024-12-05T06:36:34.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.226 [2024-12-05T06:36:34.692Z] =================================================================================================================== 00:11:39.226 [2024-12-05T06:36:34.692Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:39.226 06:36:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:39.226 06:36:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:39.226 06:36:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76590' 00:11:39.226 06:36:34 -- common/autotest_common.sh@955 -- # kill 76590 00:11:39.226 06:36:34 -- common/autotest_common.sh@960 -- # wait 76590 00:11:39.485 06:36:34 -- target/tls.sh@37 -- # return 1 00:11:39.485 06:36:34 -- common/autotest_common.sh@653 -- # es=1 00:11:39.485 06:36:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.485 06:36:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.485 06:36:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.485 06:36:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.485 06:36:34 -- common/autotest_common.sh@650 -- # local es=0 00:11:39.485 06:36:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.485 06:36:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:39.485 06:36:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.485 06:36:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:39.485 06:36:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.485 06:36:34 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:39.485 06:36:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:39.485 06:36:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:39.485 06:36:34 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:11:39.485 06:36:34 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:39.485 06:36:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:39.485 06:36:34 -- target/tls.sh@28 -- # bdevperf_pid=76618 00:11:39.485 06:36:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:39.485 06:36:34 -- target/tls.sh@31 -- # waitforlisten 76618 /var/tmp/bdevperf.sock 00:11:39.485 06:36:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:39.485 06:36:34 -- common/autotest_common.sh@829 -- # '[' -z 76618 ']' 00:11:39.485 06:36:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:39.485 06:36:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:39.485 06:36:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:39.485 06:36:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.485 06:36:34 -- common/autotest_common.sh@10 -- # set +x 00:11:39.485 [2024-12-05 06:36:34.843240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:39.485 [2024-12-05 06:36:34.843624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76618 ] 00:11:39.744 [2024-12-05 06:36:34.979203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.744 [2024-12-05 06:36:35.013142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.744 06:36:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.744 06:36:35 -- common/autotest_common.sh@862 -- # return 0 00:11:39.744 06:36:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.002 [2024-12-05 06:36:35.312979] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:40.002 [2024-12-05 06:36:35.321265] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:40.002 [2024-12-05 06:36:35.321585] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:40.002 [2024-12-05 06:36:35.321762] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:40.002 [2024-12-05 06:36:35.322682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebb80 (107): Transport endpoint is not connected 00:11:40.002 [2024-12-05 06:36:35.323674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebb80 (9): Bad file descriptor 00:11:40.002 [2024-12-05 06:36:35.324677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:40.002 [2024-12-05 06:36:35.324700] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:40.002 [2024-12-05 06:36:35.324725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:40.002 request: 00:11:40.003 { 00:11:40.003 "name": "TLSTEST", 00:11:40.003 "trtype": "tcp", 00:11:40.003 "traddr": "10.0.0.2", 00:11:40.003 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:40.003 "adrfam": "ipv4", 00:11:40.003 "trsvcid": "4420", 00:11:40.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.003 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:40.003 "method": "bdev_nvme_attach_controller", 00:11:40.003 "req_id": 1 00:11:40.003 } 00:11:40.003 Got JSON-RPC error response 00:11:40.003 response: 00:11:40.003 { 00:11:40.003 "code": -32602, 00:11:40.003 "message": "Invalid parameters" 00:11:40.003 } 00:11:40.003 06:36:35 -- target/tls.sh@36 -- # killprocess 76618 00:11:40.003 06:36:35 -- common/autotest_common.sh@936 -- # '[' -z 76618 ']' 00:11:40.003 06:36:35 -- common/autotest_common.sh@940 -- # kill -0 76618 00:11:40.003 06:36:35 -- common/autotest_common.sh@941 -- # uname 00:11:40.003 06:36:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.003 06:36:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76618 00:11:40.003 06:36:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:40.003 killing process with pid 76618 00:11:40.003 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.003 00:11:40.003 Latency(us) 00:11:40.003 [2024-12-05T06:36:35.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.003 [2024-12-05T06:36:35.469Z] =================================================================================================================== 00:11:40.003 [2024-12-05T06:36:35.469Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.003 06:36:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:40.003 06:36:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76618' 00:11:40.003 06:36:35 -- common/autotest_common.sh@955 -- # kill 76618 00:11:40.003 06:36:35 -- common/autotest_common.sh@960 -- # wait 76618 00:11:40.261 06:36:35 -- target/tls.sh@37 -- # return 1 00:11:40.261 06:36:35 -- common/autotest_common.sh@653 -- # es=1 00:11:40.261 06:36:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.261 06:36:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.261 06:36:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.261 06:36:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.261 06:36:35 -- common/autotest_common.sh@650 -- # local es=0 00:11:40.261 06:36:35 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.261 06:36:35 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:40.261 06:36:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.261 06:36:35 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:40.261 06:36:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.261 06:36:35 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:40.261 06:36:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:40.261 06:36:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:40.261 06:36:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:40.261 06:36:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:40.261 06:36:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:40.261 06:36:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:40.261 06:36:35 -- target/tls.sh@28 -- # bdevperf_pid=76638 00:11:40.261 06:36:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:40.261 06:36:35 -- target/tls.sh@31 -- # waitforlisten 76638 /var/tmp/bdevperf.sock 00:11:40.261 06:36:35 -- common/autotest_common.sh@829 -- # '[' -z 76638 ']' 00:11:40.261 06:36:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:40.261 06:36:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.261 06:36:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:40.262 06:36:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.262 06:36:35 -- common/autotest_common.sh@10 -- # set +x 00:11:40.262 [2024-12-05 06:36:35.554967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:40.262 [2024-12-05 06:36:35.555274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76638 ] 00:11:40.262 [2024-12-05 06:36:35.687193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.262 [2024-12-05 06:36:35.721087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.198 06:36:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.198 06:36:36 -- common/autotest_common.sh@862 -- # return 0 00:11:41.198 06:36:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:41.457 [2024-12-05 06:36:36.693565] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:41.457 [2024-12-05 06:36:36.701520] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:41.457 [2024-12-05 06:36:36.701762] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:41.457 [2024-12-05 06:36:36.701934] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:41.457 [2024-12-05 06:36:36.702103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb72b80 
(107): Transport endpoint is not connected 00:11:41.457 [2024-12-05 06:36:36.703093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb72b80 (9): Bad file descriptor 00:11:41.457 [2024-12-05 06:36:36.704089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:41.457 [2024-12-05 06:36:36.704305] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:41.457 [2024-12-05 06:36:36.704347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:41.457 request: 00:11:41.457 { 00:11:41.457 "name": "TLSTEST", 00:11:41.457 "trtype": "tcp", 00:11:41.457 "traddr": "10.0.0.2", 00:11:41.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:41.457 "adrfam": "ipv4", 00:11:41.457 "trsvcid": "4420", 00:11:41.457 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.457 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:41.457 "method": "bdev_nvme_attach_controller", 00:11:41.457 "req_id": 1 00:11:41.457 } 00:11:41.457 Got JSON-RPC error response 00:11:41.457 response: 00:11:41.457 { 00:11:41.457 "code": -32602, 00:11:41.457 "message": "Invalid parameters" 00:11:41.457 } 00:11:41.457 06:36:36 -- target/tls.sh@36 -- # killprocess 76638 00:11:41.457 06:36:36 -- common/autotest_common.sh@936 -- # '[' -z 76638 ']' 00:11:41.457 06:36:36 -- common/autotest_common.sh@940 -- # kill -0 76638 00:11:41.457 06:36:36 -- common/autotest_common.sh@941 -- # uname 00:11:41.457 06:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.457 06:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76638 00:11:41.457 killing process with pid 76638 00:11:41.457 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.457 00:11:41.457 Latency(us) 00:11:41.457 [2024-12-05T06:36:36.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.457 [2024-12-05T06:36:36.923Z] =================================================================================================================== 00:11:41.457 [2024-12-05T06:36:36.923Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:41.457 06:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:41.457 06:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:41.457 06:36:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76638' 00:11:41.457 06:36:36 -- common/autotest_common.sh@955 -- # kill 76638 00:11:41.457 06:36:36 -- common/autotest_common.sh@960 -- # wait 76638 00:11:41.457 06:36:36 -- target/tls.sh@37 -- # return 1 00:11:41.457 06:36:36 -- common/autotest_common.sh@653 -- # es=1 00:11:41.457 06:36:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.457 06:36:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.457 06:36:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.457 06:36:36 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.457 06:36:36 -- common/autotest_common.sh@650 -- # local es=0 00:11:41.457 06:36:36 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.457 06:36:36 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:41.457 06:36:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.457 06:36:36 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:11:41.457 06:36:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.457 06:36:36 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:41.457 06:36:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:41.457 06:36:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:41.457 06:36:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:41.457 06:36:36 -- target/tls.sh@23 -- # psk= 00:11:41.457 06:36:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:41.457 06:36:36 -- target/tls.sh@28 -- # bdevperf_pid=76660 00:11:41.457 06:36:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:41.457 06:36:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:41.457 06:36:36 -- target/tls.sh@31 -- # waitforlisten 76660 /var/tmp/bdevperf.sock 00:11:41.457 06:36:36 -- common/autotest_common.sh@829 -- # '[' -z 76660 ']' 00:11:41.457 06:36:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.457 06:36:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.457 06:36:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.457 06:36:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.457 06:36:36 -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 [2024-12-05 06:36:36.945460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.716 [2024-12-05 06:36:36.945742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76660 ] 00:11:41.716 [2024-12-05 06:36:37.082771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.716 [2024-12-05 06:36:37.116515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.722 06:36:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.722 06:36:37 -- common/autotest_common.sh@862 -- # return 0 00:11:42.722 06:36:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:42.985 [2024-12-05 06:36:38.171931] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:42.985 [2024-12-05 06:36:38.173697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2373450 (9): Bad file descriptor 00:11:42.985 [2024-12-05 06:36:38.174691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:42.985 [2024-12-05 06:36:38.174707] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:42.985 [2024-12-05 06:36:38.174717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
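For context on the failures above: the first two attach attempts are identity-lookup misses. The target assembles a TLS PSK identity from a fixed prefix plus the host and subsystem NQNs and searches the hosts registered on that subsystem for a matching key; neither host2-on-cnode1 nor host1-on-cnode2 was ever registered, so the lookup fails and the controller lands in the failed state recorded above. The third attempt passes no --psk at all against the TLS listener and fails at connect time the same way. A minimal sketch of that identity, using only strings copied from the error lines above (the 'NVMe0R01' prefix is taken verbatim from the log, not from a spec lookup):

  # Sketch only: the PSK identity the target searches for, per the errors above.
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"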
00:11:42.985 request: 00:11:42.985 { 00:11:42.985 "name": "TLSTEST", 00:11:42.985 "trtype": "tcp", 00:11:42.985 "traddr": "10.0.0.2", 00:11:42.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.985 "adrfam": "ipv4", 00:11:42.985 "trsvcid": "4420", 00:11:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.985 "method": "bdev_nvme_attach_controller", 00:11:42.985 "req_id": 1 00:11:42.985 } 00:11:42.985 Got JSON-RPC error response 00:11:42.985 response: 00:11:42.985 { 00:11:42.985 "code": -32602, 00:11:42.985 "message": "Invalid parameters" 00:11:42.985 } 00:11:42.985 06:36:38 -- target/tls.sh@36 -- # killprocess 76660 00:11:42.985 06:36:38 -- common/autotest_common.sh@936 -- # '[' -z 76660 ']' 00:11:42.985 06:36:38 -- common/autotest_common.sh@940 -- # kill -0 76660 00:11:42.985 06:36:38 -- common/autotest_common.sh@941 -- # uname 00:11:42.985 06:36:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.985 06:36:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76660 00:11:42.985 killing process with pid 76660 00:11:42.985 Received shutdown signal, test time was about 10.000000 seconds 00:11:42.985 00:11:42.985 Latency(us) 00:11:42.985 [2024-12-05T06:36:38.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.986 [2024-12-05T06:36:38.452Z] =================================================================================================================== 00:11:42.986 [2024-12-05T06:36:38.452Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:42.986 06:36:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:42.986 06:36:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:42.986 06:36:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76660' 00:11:42.986 06:36:38 -- common/autotest_common.sh@955 -- # kill 76660 00:11:42.986 06:36:38 -- common/autotest_common.sh@960 -- # wait 76660 00:11:42.986 06:36:38 -- target/tls.sh@37 -- # return 1 00:11:42.986 06:36:38 -- common/autotest_common.sh@653 -- # es=1 00:11:42.986 06:36:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.986 06:36:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.986 06:36:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.986 06:36:38 -- target/tls.sh@167 -- # killprocess 76222 00:11:42.986 06:36:38 -- common/autotest_common.sh@936 -- # '[' -z 76222 ']' 00:11:42.986 06:36:38 -- common/autotest_common.sh@940 -- # kill -0 76222 00:11:42.986 06:36:38 -- common/autotest_common.sh@941 -- # uname 00:11:42.986 06:36:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.986 06:36:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76222 00:11:42.986 killing process with pid 76222 00:11:42.986 06:36:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:42.986 06:36:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:42.986 06:36:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76222' 00:11:42.986 06:36:38 -- common/autotest_common.sh@955 -- # kill 76222 00:11:42.986 06:36:38 -- common/autotest_common.sh@960 -- # wait 76222 00:11:43.245 06:36:38 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:43.245 06:36:38 -- target/tls.sh@49 -- # local key hash crc 00:11:43.245 06:36:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:43.245 06:36:38 -- target/tls.sh@51 -- # hash=02 
00:11:43.245 06:36:38 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:11:43.245 06:36:38 -- target/tls.sh@52 -- # gzip -1 -c 00:11:43.245 06:36:38 -- target/tls.sh@52 -- # head -c 4 00:11:43.245 06:36:38 -- target/tls.sh@52 -- # tail -c8 00:11:43.245 06:36:38 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:43.245 06:36:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:43.245 06:36:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:43.245 06:36:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.245 06:36:38 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.245 06:36:38 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.245 06:36:38 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:43.245 06:36:38 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.245 06:36:38 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:43.245 06:36:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:43.245 06:36:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.245 06:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:43.245 06:36:38 -- nvmf/common.sh@469 -- # nvmfpid=76710 00:11:43.245 06:36:38 -- nvmf/common.sh@470 -- # waitforlisten 76710 00:11:43.245 06:36:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:43.245 06:36:38 -- common/autotest_common.sh@829 -- # '[' -z 76710 ']' 00:11:43.245 06:36:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.245 06:36:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.245 06:36:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.245 06:36:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.245 06:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:43.245 [2024-12-05 06:36:38.627906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:43.245 [2024-12-05 06:36:38.628008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.528 [2024-12-05 06:36:38.760377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.528 [2024-12-05 06:36:38.793409] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.528 [2024-12-05 06:36:38.793557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.528 [2024-12-05 06:36:38.793569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.528 [2024-12-05 06:36:38.793577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
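For context, the format_interchange_psk pipeline just above builds the interchange string in three steps: the CRC32 of the key bytes is harvested from the gzip trailer (the last 8 bytes of gzip output are CRC32, little-endian, then ISIZE), appended to the key, and the result is base64-encoded under the NVMeTLSkey-1:<hash>: prefix. A condensed restatement of the same derivation, with the same values as the trace (not a separate run):

  key=00112233445566778899aabbccddeeff0011223344556677
  # gzip trailer = CRC32 (4 bytes, little-endian) + ISIZE (4 bytes)
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  # binary-safe here only because this CRC happens to contain no NUL/newline
  # bytes, as the raw crc='...e...' value in the trace shows
  # "1" = interchange format version, "02" = the hash id chosen above
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"

This reproduces the NVMeTLSkey-1:02:MDAx...wWXNJw==: value logged above. The chmod 0600 just above also matters: the later tests flip the file to 0666 and both sides refuse the key.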
00:11:43.529 [2024-12-05 06:36:38.793606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.529 06:36:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.529 06:36:38 -- common/autotest_common.sh@862 -- # return 0 00:11:43.529 06:36:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.529 06:36:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.529 06:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:43.529 06:36:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.529 06:36:38 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.529 06:36:38 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:43.529 06:36:38 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:43.787 [2024-12-05 06:36:39.149432] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.787 06:36:39 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:44.046 06:36:39 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:44.305 [2024-12-05 06:36:39.717635] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:44.305 [2024-12-05 06:36:39.717880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.305 06:36:39 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:44.565 malloc0 00:11:44.565 06:36:40 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:44.824 06:36:40 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.083 06:36:40 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:45.083 06:36:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:45.083 06:36:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:45.083 06:36:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:45.083 06:36:40 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:45.083 06:36:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.083 06:36:40 -- target/tls.sh@28 -- # bdevperf_pid=76746 00:11:45.083 06:36:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:45.083 06:36:40 -- target/tls.sh@31 -- # waitforlisten 76746 /var/tmp/bdevperf.sock 00:11:45.083 06:36:40 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:45.083 06:36:40 -- common/autotest_common.sh@829 -- # '[' -z 76746 ']' 00:11:45.083 06:36:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:45.083 06:36:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.083 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:11:45.083 06:36:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:45.083 06:36:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.083 06:36:40 -- common/autotest_common.sh@10 -- # set +x 00:11:45.083 [2024-12-05 06:36:40.532535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:45.083 [2024-12-05 06:36:40.532631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76746 ] 00:11:45.343 [2024-12-05 06:36:40.668380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.343 [2024-12-05 06:36:40.702788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.279 06:36:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.279 06:36:41 -- common/autotest_common.sh@862 -- # return 0 00:11:46.279 06:36:41 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:46.279 [2024-12-05 06:36:41.699602] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:46.538 TLSTESTn1 00:11:46.538 06:36:41 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:46.538 Running I/O for 10 seconds... 00:11:56.531 00:11:56.531 Latency(us) 00:11:56.531 [2024-12-05T06:36:51.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.531 [2024-12-05T06:36:51.997Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:56.531 Verification LBA range: start 0x0 length 0x2000 00:11:56.531 TLSTESTn1 : 10.01 5750.75 22.46 0.00 0.00 22222.74 4230.05 21805.61 00:11:56.531 [2024-12-05T06:36:51.997Z] =================================================================================================================== 00:11:56.531 [2024-12-05T06:36:51.997Z] Total : 5750.75 22.46 0.00 0.00 22222.74 4230.05 21805.61 00:11:56.531 0 00:11:56.531 06:36:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.531 06:36:51 -- target/tls.sh@45 -- # killprocess 76746 00:11:56.531 06:36:51 -- common/autotest_common.sh@936 -- # '[' -z 76746 ']' 00:11:56.531 06:36:51 -- common/autotest_common.sh@940 -- # kill -0 76746 00:11:56.531 06:36:51 -- common/autotest_common.sh@941 -- # uname 00:11:56.531 06:36:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.531 06:36:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76746 00:11:56.531 killing process with pid 76746 00:11:56.531 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.531 00:11:56.531 Latency(us) 00:11:56.531 [2024-12-05T06:36:51.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.531 [2024-12-05T06:36:51.997Z] =================================================================================================================== 00:11:56.531 [2024-12-05T06:36:51.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:56.531 06:36:51 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:56.531 06:36:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:56.531 06:36:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76746' 00:11:56.531 06:36:51 -- common/autotest_common.sh@955 -- # kill 76746 00:11:56.531 06:36:51 -- common/autotest_common.sh@960 -- # wait 76746 00:11:56.791 06:36:52 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.791 06:36:52 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.791 06:36:52 -- common/autotest_common.sh@650 -- # local es=0 00:11:56.791 06:36:52 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.791 06:36:52 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:56.791 06:36:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.791 06:36:52 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:56.791 06:36:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.791 06:36:52 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:56.791 06:36:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:56.791 06:36:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:56.791 06:36:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:56.791 06:36:52 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:56.791 06:36:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:56.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.791 06:36:52 -- target/tls.sh@28 -- # bdevperf_pid=76888 00:11:56.791 06:36:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:56.791 06:36:52 -- target/tls.sh@31 -- # waitforlisten 76888 /var/tmp/bdevperf.sock 00:11:56.791 06:36:52 -- common/autotest_common.sh@829 -- # '[' -z 76888 ']' 00:11:56.791 06:36:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.791 06:36:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:56.791 06:36:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.791 06:36:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.791 06:36:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.791 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:11:56.791 [2024-12-05 06:36:52.186886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:56.791 [2024-12-05 06:36:52.186986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76888 ] 00:11:57.051 [2024-12-05 06:36:52.319174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.051 [2024-12-05 06:36:52.351213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.987 06:36:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.987 06:36:53 -- common/autotest_common.sh@862 -- # return 0 00:11:57.987 06:36:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:58.265 [2024-12-05 06:36:53.489844] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:58.265 [2024-12-05 06:36:53.489917] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:58.265 request: 00:11:58.265 { 00:11:58.265 "name": "TLSTEST", 00:11:58.265 "trtype": "tcp", 00:11:58.265 "traddr": "10.0.0.2", 00:11:58.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.265 "adrfam": "ipv4", 00:11:58.265 "trsvcid": "4420", 00:11:58.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.265 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:58.265 "method": "bdev_nvme_attach_controller", 00:11:58.265 "req_id": 1 00:11:58.265 } 00:11:58.265 Got JSON-RPC error response 00:11:58.265 response: 00:11:58.265 { 00:11:58.265 "code": -22, 00:11:58.265 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:58.265 } 00:11:58.265 06:36:53 -- target/tls.sh@36 -- # killprocess 76888 00:11:58.265 06:36:53 -- common/autotest_common.sh@936 -- # '[' -z 76888 ']' 00:11:58.265 06:36:53 -- common/autotest_common.sh@940 -- # kill -0 76888 00:11:58.265 06:36:53 -- common/autotest_common.sh@941 -- # uname 00:11:58.265 06:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.265 06:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76888 00:11:58.265 killing process with pid 76888 00:11:58.265 Received shutdown signal, test time was about 10.000000 seconds 00:11:58.265 00:11:58.265 Latency(us) 00:11:58.265 [2024-12-05T06:36:53.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.265 [2024-12-05T06:36:53.731Z] =================================================================================================================== 00:11:58.265 [2024-12-05T06:36:53.731Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:58.265 06:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:58.265 06:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:58.265 06:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76888' 00:11:58.265 06:36:53 -- common/autotest_common.sh@955 -- # kill 76888 00:11:58.265 06:36:53 -- common/autotest_common.sh@960 -- # wait 76888 00:11:58.265 06:36:53 -- target/tls.sh@37 -- # return 1 00:11:58.265 06:36:53 -- common/autotest_common.sh@653 -- # es=1 00:11:58.265 06:36:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.265 06:36:53 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.265 06:36:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.265 06:36:53 -- target/tls.sh@183 -- # killprocess 76710 00:11:58.265 06:36:53 -- common/autotest_common.sh@936 -- # '[' -z 76710 ']' 00:11:58.265 06:36:53 -- common/autotest_common.sh@940 -- # kill -0 76710 00:11:58.265 06:36:53 -- common/autotest_common.sh@941 -- # uname 00:11:58.265 06:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.265 06:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76710 00:11:58.573 killing process with pid 76710 00:11:58.573 06:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:58.573 06:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:58.573 06:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76710' 00:11:58.573 06:36:53 -- common/autotest_common.sh@955 -- # kill 76710 00:11:58.573 06:36:53 -- common/autotest_common.sh@960 -- # wait 76710 00:11:58.573 06:36:53 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:58.573 06:36:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:58.573 06:36:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.573 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:58.573 06:36:53 -- nvmf/common.sh@469 -- # nvmfpid=76925 00:11:58.573 06:36:53 -- nvmf/common.sh@470 -- # waitforlisten 76925 00:11:58.573 06:36:53 -- common/autotest_common.sh@829 -- # '[' -z 76925 ']' 00:11:58.573 06:36:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:58.573 06:36:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.573 06:36:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.573 06:36:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.573 06:36:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.573 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:58.573 [2024-12-05 06:36:53.919519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:58.573 [2024-12-05 06:36:53.919804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.833 [2024-12-05 06:36:54.056110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.833 [2024-12-05 06:36:54.089995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:58.833 [2024-12-05 06:36:54.090144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.833 [2024-12-05 06:36:54.090157] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.833 [2024-12-05 06:36:54.090165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
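The -22 "Could not retrieve PSK from file" response above is the point of the 0666 experiment: SPDK rejects a PSK file that is readable by group or other, here on the initiator path, and in the nvmf_subsystem_add_host call that follows, on the target path too. A quick way to inspect the state the trace is in at this point (the stat invocation is illustrative, not taken from the log):

  stat -c '%a' /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt  # prints 666 after the chmod above
  # tls.sh restores the required mode further down before the flow can succeed:
  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt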
00:11:58.833 [2024-12-05 06:36:54.090195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.402 06:36:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.402 06:36:54 -- common/autotest_common.sh@862 -- # return 0 00:11:59.403 06:36:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:59.403 06:36:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:59.403 06:36:54 -- common/autotest_common.sh@10 -- # set +x 00:11:59.661 06:36:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.661 06:36:54 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.661 06:36:54 -- common/autotest_common.sh@650 -- # local es=0 00:11:59.661 06:36:54 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.661 06:36:54 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:59.661 06:36:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.661 06:36:54 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:59.661 06:36:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.661 06:36:54 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.661 06:36:54 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:59.661 06:36:54 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:59.919 [2024-12-05 06:36:55.151227] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.919 06:36:55 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:00.176 06:36:55 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:00.176 [2024-12-05 06:36:55.631358] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:00.176 [2024-12-05 06:36:55.631661] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.434 06:36:55 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:00.434 malloc0 00:12:00.692 06:36:55 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:00.950 06:36:56 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:00.950 [2024-12-05 06:36:56.370039] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:00.950 [2024-12-05 06:36:56.370096] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:00.950 [2024-12-05 06:36:56.370130] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:00.950 request: 00:12:00.950 { 00:12:00.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.950 "host": "nqn.2016-06.io.spdk:host1", 00:12:00.950 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:00.950 "method": "nvmf_subsystem_add_host", 00:12:00.950 
"req_id": 1 00:12:00.950 } 00:12:00.950 Got JSON-RPC error response 00:12:00.950 response: 00:12:00.950 { 00:12:00.950 "code": -32603, 00:12:00.950 "message": "Internal error" 00:12:00.950 } 00:12:00.950 06:36:56 -- common/autotest_common.sh@653 -- # es=1 00:12:00.950 06:36:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.950 06:36:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.950 06:36:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.950 06:36:56 -- target/tls.sh@189 -- # killprocess 76925 00:12:00.950 06:36:56 -- common/autotest_common.sh@936 -- # '[' -z 76925 ']' 00:12:00.950 06:36:56 -- common/autotest_common.sh@940 -- # kill -0 76925 00:12:00.950 06:36:56 -- common/autotest_common.sh@941 -- # uname 00:12:00.950 06:36:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.950 06:36:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76925 00:12:01.208 06:36:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:01.208 06:36:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:01.208 killing process with pid 76925 00:12:01.208 06:36:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76925' 00:12:01.208 06:36:56 -- common/autotest_common.sh@955 -- # kill 76925 00:12:01.208 06:36:56 -- common/autotest_common.sh@960 -- # wait 76925 00:12:01.208 06:36:56 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.208 06:36:56 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:01.208 06:36:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:01.208 06:36:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.208 06:36:56 -- common/autotest_common.sh@10 -- # set +x 00:12:01.208 06:36:56 -- nvmf/common.sh@469 -- # nvmfpid=76983 00:12:01.208 06:36:56 -- nvmf/common.sh@470 -- # waitforlisten 76983 00:12:01.208 06:36:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:01.208 06:36:56 -- common/autotest_common.sh@829 -- # '[' -z 76983 ']' 00:12:01.208 06:36:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.208 06:36:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.208 06:36:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.208 06:36:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.208 06:36:56 -- common/autotest_common.sh@10 -- # set +x 00:12:01.208 [2024-12-05 06:36:56.633556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:01.208 [2024-12-05 06:36:56.633652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.467 [2024-12-05 06:36:56.769107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.467 [2024-12-05 06:36:56.803835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:01.467 [2024-12-05 06:36:56.804031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:01.467 [2024-12-05 06:36:56.804042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.467 [2024-12-05 06:36:56.804050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.467 [2024-12-05 06:36:56.804073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.467 06:36:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.467 06:36:56 -- common/autotest_common.sh@862 -- # return 0 00:12:01.467 06:36:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:01.467 06:36:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.467 06:36:56 -- common/autotest_common.sh@10 -- # set +x 00:12:01.725 06:36:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.725 06:36:56 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.725 06:36:56 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.725 06:36:56 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:01.725 [2024-12-05 06:36:57.147509] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.725 06:36:57 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:01.984 06:36:57 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:02.243 [2024-12-05 06:36:57.619693] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:02.243 [2024-12-05 06:36:57.620013] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.243 06:36:57 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:02.502 malloc0 00:12:02.502 06:36:57 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:02.762 06:36:58 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.021 06:36:58 -- target/tls.sh@197 -- # bdevperf_pid=77031 00:12:03.021 06:36:58 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:03.021 06:36:58 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:03.021 06:36:58 -- target/tls.sh@200 -- # waitforlisten 77031 /var/tmp/bdevperf.sock 00:12:03.021 06:36:58 -- common/autotest_common.sh@829 -- # '[' -z 77031 ']' 00:12:03.021 06:36:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.021 06:36:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.021 06:36:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
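With the key back at 0600, the rebuilt target accepts the host registration and the attach that follows succeeds. For readability, the rpc.py sequence scattered through the surrounding trace, condensed (rpc.py and key_long.txt stand in for the full /home/vagrant/spdk_repo/spdk paths used above; all flags are copied from the log):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt
  # initiator side, against the bdevperf RPC socket:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt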
00:12:03.021 06:36:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.021 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:12:03.021 [2024-12-05 06:36:58.423290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:03.021 [2024-12-05 06:36:58.423391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77031 ] 00:12:03.280 [2024-12-05 06:36:58.561518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.280 [2024-12-05 06:36:58.603491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.219 06:36:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.219 06:36:59 -- common/autotest_common.sh@862 -- # return 0 00:12:04.219 06:36:59 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:04.219 [2024-12-05 06:36:59.592094] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.219 TLSTESTn1 00:12:04.219 06:36:59 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:04.788 06:37:00 -- target/tls.sh@205 -- # tgtconf='{ 00:12:04.788 "subsystems": [ 00:12:04.788 { 00:12:04.788 "subsystem": "iobuf", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "iobuf_set_options", 00:12:04.788 "params": { 00:12:04.788 "small_pool_count": 8192, 00:12:04.788 "large_pool_count": 1024, 00:12:04.788 "small_bufsize": 8192, 00:12:04.788 "large_bufsize": 135168 00:12:04.788 } 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "sock", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "sock_impl_set_options", 00:12:04.788 "params": { 00:12:04.788 "impl_name": "uring", 00:12:04.788 "recv_buf_size": 2097152, 00:12:04.788 "send_buf_size": 2097152, 00:12:04.788 "enable_recv_pipe": true, 00:12:04.788 "enable_quickack": false, 00:12:04.788 "enable_placement_id": 0, 00:12:04.788 "enable_zerocopy_send_server": false, 00:12:04.788 "enable_zerocopy_send_client": false, 00:12:04.788 "zerocopy_threshold": 0, 00:12:04.788 "tls_version": 0, 00:12:04.788 "enable_ktls": false 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "sock_impl_set_options", 00:12:04.788 "params": { 00:12:04.788 "impl_name": "posix", 00:12:04.788 "recv_buf_size": 2097152, 00:12:04.788 "send_buf_size": 2097152, 00:12:04.788 "enable_recv_pipe": true, 00:12:04.788 "enable_quickack": false, 00:12:04.788 "enable_placement_id": 0, 00:12:04.788 "enable_zerocopy_send_server": true, 00:12:04.788 "enable_zerocopy_send_client": false, 00:12:04.788 "zerocopy_threshold": 0, 00:12:04.788 "tls_version": 0, 00:12:04.788 "enable_ktls": false 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "sock_impl_set_options", 00:12:04.788 "params": { 00:12:04.788 "impl_name": "ssl", 00:12:04.788 "recv_buf_size": 4096, 00:12:04.788 "send_buf_size": 4096, 00:12:04.788 "enable_recv_pipe": true, 00:12:04.788 "enable_quickack": false, 00:12:04.788 "enable_placement_id": 0, 00:12:04.788 "enable_zerocopy_send_server": true, 00:12:04.788 "enable_zerocopy_send_client": false, 00:12:04.788 
"zerocopy_threshold": 0, 00:12:04.788 "tls_version": 0, 00:12:04.788 "enable_ktls": false 00:12:04.788 } 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "vmd", 00:12:04.788 "config": [] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "accel", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "accel_set_options", 00:12:04.788 "params": { 00:12:04.788 "small_cache_size": 128, 00:12:04.788 "large_cache_size": 16, 00:12:04.788 "task_count": 2048, 00:12:04.788 "sequence_count": 2048, 00:12:04.788 "buf_count": 2048 00:12:04.788 } 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "bdev", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "bdev_set_options", 00:12:04.788 "params": { 00:12:04.788 "bdev_io_pool_size": 65535, 00:12:04.788 "bdev_io_cache_size": 256, 00:12:04.788 "bdev_auto_examine": true, 00:12:04.788 "iobuf_small_cache_size": 128, 00:12:04.788 "iobuf_large_cache_size": 16 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_raid_set_options", 00:12:04.788 "params": { 00:12:04.788 "process_window_size_kb": 1024 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_iscsi_set_options", 00:12:04.788 "params": { 00:12:04.788 "timeout_sec": 30 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_nvme_set_options", 00:12:04.788 "params": { 00:12:04.788 "action_on_timeout": "none", 00:12:04.788 "timeout_us": 0, 00:12:04.788 "timeout_admin_us": 0, 00:12:04.788 "keep_alive_timeout_ms": 10000, 00:12:04.788 "transport_retry_count": 4, 00:12:04.788 "arbitration_burst": 0, 00:12:04.788 "low_priority_weight": 0, 00:12:04.788 "medium_priority_weight": 0, 00:12:04.788 "high_priority_weight": 0, 00:12:04.788 "nvme_adminq_poll_period_us": 10000, 00:12:04.788 "nvme_ioq_poll_period_us": 0, 00:12:04.788 "io_queue_requests": 0, 00:12:04.788 "delay_cmd_submit": true, 00:12:04.788 "bdev_retry_count": 3, 00:12:04.788 "transport_ack_timeout": 0, 00:12:04.788 "ctrlr_loss_timeout_sec": 0, 00:12:04.788 "reconnect_delay_sec": 0, 00:12:04.788 "fast_io_fail_timeout_sec": 0, 00:12:04.788 "generate_uuids": false, 00:12:04.788 "transport_tos": 0, 00:12:04.788 "io_path_stat": false, 00:12:04.788 "allow_accel_sequence": false 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_nvme_set_hotplug", 00:12:04.788 "params": { 00:12:04.788 "period_us": 100000, 00:12:04.788 "enable": false 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_malloc_create", 00:12:04.788 "params": { 00:12:04.788 "name": "malloc0", 00:12:04.788 "num_blocks": 8192, 00:12:04.788 "block_size": 4096, 00:12:04.788 "physical_block_size": 4096, 00:12:04.788 "uuid": "39f94f1e-94f6-4b1c-8a75-751e1c55fc1b", 00:12:04.788 "optimal_io_boundary": 0 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "bdev_wait_for_examine" 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "nbd", 00:12:04.788 "config": [] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "scheduler", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "framework_set_scheduler", 00:12:04.788 "params": { 00:12:04.788 "name": "static" 00:12:04.788 } 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "subsystem": "nvmf", 00:12:04.788 "config": [ 00:12:04.788 { 00:12:04.788 "method": "nvmf_set_config", 00:12:04.788 "params": { 00:12:04.788 "discovery_filter": "match_any", 00:12:04.788 
"admin_cmd_passthru": { 00:12:04.788 "identify_ctrlr": false 00:12:04.788 } 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "nvmf_set_max_subsystems", 00:12:04.788 "params": { 00:12:04.788 "max_subsystems": 1024 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "nvmf_set_crdt", 00:12:04.788 "params": { 00:12:04.788 "crdt1": 0, 00:12:04.788 "crdt2": 0, 00:12:04.788 "crdt3": 0 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "nvmf_create_transport", 00:12:04.788 "params": { 00:12:04.788 "trtype": "TCP", 00:12:04.788 "max_queue_depth": 128, 00:12:04.788 "max_io_qpairs_per_ctrlr": 127, 00:12:04.788 "in_capsule_data_size": 4096, 00:12:04.788 "max_io_size": 131072, 00:12:04.788 "io_unit_size": 131072, 00:12:04.788 "max_aq_depth": 128, 00:12:04.788 "num_shared_buffers": 511, 00:12:04.788 "buf_cache_size": 4294967295, 00:12:04.788 "dif_insert_or_strip": false, 00:12:04.788 "zcopy": false, 00:12:04.788 "c2h_success": false, 00:12:04.788 "sock_priority": 0, 00:12:04.788 "abort_timeout_sec": 1 00:12:04.788 } 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "method": "nvmf_create_subsystem", 00:12:04.788 "params": { 00:12:04.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.788 "allow_any_host": false, 00:12:04.788 "serial_number": "SPDK00000000000001", 00:12:04.788 "model_number": "SPDK bdev Controller", 00:12:04.789 "max_namespaces": 10, 00:12:04.789 "min_cntlid": 1, 00:12:04.789 "max_cntlid": 65519, 00:12:04.789 "ana_reporting": false 00:12:04.789 } 00:12:04.789 }, 00:12:04.789 { 00:12:04.789 "method": "nvmf_subsystem_add_host", 00:12:04.789 "params": { 00:12:04.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.789 "host": "nqn.2016-06.io.spdk:host1", 00:12:04.789 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:04.789 } 00:12:04.789 }, 00:12:04.789 { 00:12:04.789 "method": "nvmf_subsystem_add_ns", 00:12:04.789 "params": { 00:12:04.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.789 "namespace": { 00:12:04.789 "nsid": 1, 00:12:04.789 "bdev_name": "malloc0", 00:12:04.789 "nguid": "39F94F1E94F64B1C8A75751E1C55FC1B", 00:12:04.789 "uuid": "39f94f1e-94f6-4b1c-8a75-751e1c55fc1b" 00:12:04.789 } 00:12:04.789 } 00:12:04.789 }, 00:12:04.789 { 00:12:04.789 "method": "nvmf_subsystem_add_listener", 00:12:04.789 "params": { 00:12:04.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.789 "listen_address": { 00:12:04.789 "trtype": "TCP", 00:12:04.789 "adrfam": "IPv4", 00:12:04.789 "traddr": "10.0.0.2", 00:12:04.789 "trsvcid": "4420" 00:12:04.789 }, 00:12:04.789 "secure_channel": true 00:12:04.789 } 00:12:04.789 } 00:12:04.789 ] 00:12:04.789 } 00:12:04.789 ] 00:12:04.789 }' 00:12:04.789 06:37:00 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:05.048 06:37:00 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:05.049 "subsystems": [ 00:12:05.049 { 00:12:05.049 "subsystem": "iobuf", 00:12:05.049 "config": [ 00:12:05.049 { 00:12:05.049 "method": "iobuf_set_options", 00:12:05.049 "params": { 00:12:05.049 "small_pool_count": 8192, 00:12:05.049 "large_pool_count": 1024, 00:12:05.049 "small_bufsize": 8192, 00:12:05.049 "large_bufsize": 135168 00:12:05.049 } 00:12:05.049 } 00:12:05.049 ] 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "subsystem": "sock", 00:12:05.049 "config": [ 00:12:05.049 { 00:12:05.049 "method": "sock_impl_set_options", 00:12:05.049 "params": { 00:12:05.049 "impl_name": "uring", 00:12:05.049 "recv_buf_size": 2097152, 00:12:05.049 "send_buf_size": 2097152, 
00:12:05.049 "enable_recv_pipe": true, 00:12:05.049 "enable_quickack": false, 00:12:05.049 "enable_placement_id": 0, 00:12:05.049 "enable_zerocopy_send_server": false, 00:12:05.049 "enable_zerocopy_send_client": false, 00:12:05.049 "zerocopy_threshold": 0, 00:12:05.049 "tls_version": 0, 00:12:05.049 "enable_ktls": false 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "sock_impl_set_options", 00:12:05.049 "params": { 00:12:05.049 "impl_name": "posix", 00:12:05.049 "recv_buf_size": 2097152, 00:12:05.049 "send_buf_size": 2097152, 00:12:05.049 "enable_recv_pipe": true, 00:12:05.049 "enable_quickack": false, 00:12:05.049 "enable_placement_id": 0, 00:12:05.049 "enable_zerocopy_send_server": true, 00:12:05.049 "enable_zerocopy_send_client": false, 00:12:05.049 "zerocopy_threshold": 0, 00:12:05.049 "tls_version": 0, 00:12:05.049 "enable_ktls": false 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "sock_impl_set_options", 00:12:05.049 "params": { 00:12:05.049 "impl_name": "ssl", 00:12:05.049 "recv_buf_size": 4096, 00:12:05.049 "send_buf_size": 4096, 00:12:05.049 "enable_recv_pipe": true, 00:12:05.049 "enable_quickack": false, 00:12:05.049 "enable_placement_id": 0, 00:12:05.049 "enable_zerocopy_send_server": true, 00:12:05.049 "enable_zerocopy_send_client": false, 00:12:05.049 "zerocopy_threshold": 0, 00:12:05.049 "tls_version": 0, 00:12:05.049 "enable_ktls": false 00:12:05.049 } 00:12:05.049 } 00:12:05.049 ] 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "subsystem": "vmd", 00:12:05.049 "config": [] 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "subsystem": "accel", 00:12:05.049 "config": [ 00:12:05.049 { 00:12:05.049 "method": "accel_set_options", 00:12:05.049 "params": { 00:12:05.049 "small_cache_size": 128, 00:12:05.049 "large_cache_size": 16, 00:12:05.049 "task_count": 2048, 00:12:05.049 "sequence_count": 2048, 00:12:05.049 "buf_count": 2048 00:12:05.049 } 00:12:05.049 } 00:12:05.049 ] 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "subsystem": "bdev", 00:12:05.049 "config": [ 00:12:05.049 { 00:12:05.049 "method": "bdev_set_options", 00:12:05.049 "params": { 00:12:05.049 "bdev_io_pool_size": 65535, 00:12:05.049 "bdev_io_cache_size": 256, 00:12:05.049 "bdev_auto_examine": true, 00:12:05.049 "iobuf_small_cache_size": 128, 00:12:05.049 "iobuf_large_cache_size": 16 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_raid_set_options", 00:12:05.049 "params": { 00:12:05.049 "process_window_size_kb": 1024 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_iscsi_set_options", 00:12:05.049 "params": { 00:12:05.049 "timeout_sec": 30 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_nvme_set_options", 00:12:05.049 "params": { 00:12:05.049 "action_on_timeout": "none", 00:12:05.049 "timeout_us": 0, 00:12:05.049 "timeout_admin_us": 0, 00:12:05.049 "keep_alive_timeout_ms": 10000, 00:12:05.049 "transport_retry_count": 4, 00:12:05.049 "arbitration_burst": 0, 00:12:05.049 "low_priority_weight": 0, 00:12:05.049 "medium_priority_weight": 0, 00:12:05.049 "high_priority_weight": 0, 00:12:05.049 "nvme_adminq_poll_period_us": 10000, 00:12:05.049 "nvme_ioq_poll_period_us": 0, 00:12:05.049 "io_queue_requests": 512, 00:12:05.049 "delay_cmd_submit": true, 00:12:05.049 "bdev_retry_count": 3, 00:12:05.049 "transport_ack_timeout": 0, 00:12:05.049 "ctrlr_loss_timeout_sec": 0, 00:12:05.049 "reconnect_delay_sec": 0, 00:12:05.049 "fast_io_fail_timeout_sec": 0, 00:12:05.049 "generate_uuids": false, 00:12:05.049 
"transport_tos": 0, 00:12:05.049 "io_path_stat": false, 00:12:05.049 "allow_accel_sequence": false 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_nvme_attach_controller", 00:12:05.049 "params": { 00:12:05.049 "name": "TLSTEST", 00:12:05.049 "trtype": "TCP", 00:12:05.049 "adrfam": "IPv4", 00:12:05.049 "traddr": "10.0.0.2", 00:12:05.049 "trsvcid": "4420", 00:12:05.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.049 "prchk_reftag": false, 00:12:05.049 "prchk_guard": false, 00:12:05.049 "ctrlr_loss_timeout_sec": 0, 00:12:05.049 "reconnect_delay_sec": 0, 00:12:05.049 "fast_io_fail_timeout_sec": 0, 00:12:05.049 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:05.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.049 "hdgst": false, 00:12:05.049 "ddgst": false 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_nvme_set_hotplug", 00:12:05.049 "params": { 00:12:05.049 "period_us": 100000, 00:12:05.049 "enable": false 00:12:05.049 } 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "method": "bdev_wait_for_examine" 00:12:05.049 } 00:12:05.049 ] 00:12:05.049 }, 00:12:05.049 { 00:12:05.049 "subsystem": "nbd", 00:12:05.049 "config": [] 00:12:05.049 } 00:12:05.049 ] 00:12:05.049 }' 00:12:05.049 06:37:00 -- target/tls.sh@208 -- # killprocess 77031 00:12:05.049 06:37:00 -- common/autotest_common.sh@936 -- # '[' -z 77031 ']' 00:12:05.049 06:37:00 -- common/autotest_common.sh@940 -- # kill -0 77031 00:12:05.049 06:37:00 -- common/autotest_common.sh@941 -- # uname 00:12:05.049 06:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.049 06:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77031 00:12:05.049 killing process with pid 77031 00:12:05.049 Received shutdown signal, test time was about 10.000000 seconds 00:12:05.049 00:12:05.049 Latency(us) 00:12:05.049 [2024-12-05T06:37:00.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.049 [2024-12-05T06:37:00.515Z] =================================================================================================================== 00:12:05.049 [2024-12-05T06:37:00.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:05.049 06:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:05.049 06:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:05.049 06:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77031' 00:12:05.049 06:37:00 -- common/autotest_common.sh@955 -- # kill 77031 00:12:05.049 06:37:00 -- common/autotest_common.sh@960 -- # wait 77031 00:12:05.309 06:37:00 -- target/tls.sh@209 -- # killprocess 76983 00:12:05.309 06:37:00 -- common/autotest_common.sh@936 -- # '[' -z 76983 ']' 00:12:05.309 06:37:00 -- common/autotest_common.sh@940 -- # kill -0 76983 00:12:05.309 06:37:00 -- common/autotest_common.sh@941 -- # uname 00:12:05.309 06:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.309 06:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76983 00:12:05.309 killing process with pid 76983 00:12:05.309 06:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:05.309 06:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:05.309 06:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76983' 00:12:05.309 06:37:00 -- common/autotest_common.sh@955 -- # kill 76983 00:12:05.309 06:37:00 -- common/autotest_common.sh@960 -- # 
wait 76983 00:12:05.309 06:37:00 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:05.309 06:37:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.309 06:37:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.309 06:37:00 -- target/tls.sh@212 -- # echo '{ 00:12:05.309 "subsystems": [ 00:12:05.309 { 00:12:05.309 "subsystem": "iobuf", 00:12:05.309 "config": [ 00:12:05.309 { 00:12:05.309 "method": "iobuf_set_options", 00:12:05.309 "params": { 00:12:05.309 "small_pool_count": 8192, 00:12:05.309 "large_pool_count": 1024, 00:12:05.309 "small_bufsize": 8192, 00:12:05.309 "large_bufsize": 135168 00:12:05.309 } 00:12:05.309 } 00:12:05.309 ] 00:12:05.309 }, 00:12:05.309 { 00:12:05.309 "subsystem": "sock", 00:12:05.309 "config": [ 00:12:05.309 { 00:12:05.309 "method": "sock_impl_set_options", 00:12:05.309 "params": { 00:12:05.309 "impl_name": "uring", 00:12:05.309 "recv_buf_size": 2097152, 00:12:05.309 "send_buf_size": 2097152, 00:12:05.309 "enable_recv_pipe": true, 00:12:05.309 "enable_quickack": false, 00:12:05.309 "enable_placement_id": 0, 00:12:05.309 "enable_zerocopy_send_server": false, 00:12:05.309 "enable_zerocopy_send_client": false, 00:12:05.309 "zerocopy_threshold": 0, 00:12:05.309 "tls_version": 0, 00:12:05.309 "enable_ktls": false 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "sock_impl_set_options", 00:12:05.310 "params": { 00:12:05.310 "impl_name": "posix", 00:12:05.310 "recv_buf_size": 2097152, 00:12:05.310 "send_buf_size": 2097152, 00:12:05.310 "enable_recv_pipe": true, 00:12:05.310 "enable_quickack": false, 00:12:05.310 "enable_placement_id": 0, 00:12:05.310 "enable_zerocopy_send_server": true, 00:12:05.310 "enable_zerocopy_send_client": false, 00:12:05.310 "zerocopy_threshold": 0, 00:12:05.310 "tls_version": 0, 00:12:05.310 "enable_ktls": false 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "sock_impl_set_options", 00:12:05.310 "params": { 00:12:05.310 "impl_name": "ssl", 00:12:05.310 "recv_buf_size": 4096, 00:12:05.310 "send_buf_size": 4096, 00:12:05.310 "enable_recv_pipe": true, 00:12:05.310 "enable_quickack": false, 00:12:05.310 "enable_placement_id": 0, 00:12:05.310 "enable_zerocopy_send_server": true, 00:12:05.310 "enable_zerocopy_send_client": false, 00:12:05.310 "zerocopy_threshold": 0, 00:12:05.310 "tls_version": 0, 00:12:05.310 "enable_ktls": false 00:12:05.310 } 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "vmd", 00:12:05.310 "config": [] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "accel", 00:12:05.310 "config": [ 00:12:05.310 { 00:12:05.310 "method": "accel_set_options", 00:12:05.310 "params": { 00:12:05.310 "small_cache_size": 128, 00:12:05.310 "large_cache_size": 16, 00:12:05.310 "task_count": 2048, 00:12:05.310 "sequence_count": 2048, 00:12:05.310 "buf_count": 2048 00:12:05.310 } 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "bdev", 00:12:05.310 "config": [ 00:12:05.310 { 00:12:05.310 "method": "bdev_set_options", 00:12:05.310 "params": { 00:12:05.310 "bdev_io_pool_size": 65535, 00:12:05.310 "bdev_io_cache_size": 256, 00:12:05.310 "bdev_auto_examine": true, 00:12:05.310 "iobuf_small_cache_size": 128, 00:12:05.310 "iobuf_large_cache_size": 16 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "bdev_raid_set_options", 00:12:05.310 "params": { 00:12:05.310 "process_window_size_kb": 1024 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": 
"bdev_iscsi_set_options", 00:12:05.310 "params": { 00:12:05.310 "timeout_sec": 30 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "bdev_nvme_set_options", 00:12:05.310 "params": { 00:12:05.310 "action_on_timeout": "none", 00:12:05.310 "timeout_us": 0, 00:12:05.310 "timeout_admin_us": 0, 00:12:05.310 "keep_alive_timeout_ms": 10000, 00:12:05.310 "transport_retry_count": 4, 00:12:05.310 "arbitration_burst": 0, 00:12:05.310 "low_priority_weight": 0, 00:12:05.310 "medium_priority_weight": 0, 00:12:05.310 "high_priority_weight": 0, 00:12:05.310 "nvme_adminq_poll_period_us": 10000, 00:12:05.310 "nvme_ioq_poll_period_us": 0, 00:12:05.310 "io_queue_requests": 0, 00:12:05.310 "delay_cmd_submit": true, 00:12:05.310 "bdev_retry_count": 3, 00:12:05.310 "transport_ack_timeout": 0, 00:12:05.310 "ctrlr_loss_timeout_sec": 0, 00:12:05.310 "reconnect_delay_sec": 0, 00:12:05.310 "fast_io_fail_timeout_sec": 0, 00:12:05.310 "generate_uuids": false, 00:12:05.310 "transport_tos": 0, 00:12:05.310 "io_path_stat": false, 00:12:05.310 "allow_accel_sequence": false 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "bdev_nvme_set_hotplug", 00:12:05.310 "params": { 00:12:05.310 "period_us": 100000, 00:12:05.310 "enable": false 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "bdev_malloc_create", 00:12:05.310 "params": { 00:12:05.310 "name": "malloc0", 00:12:05.310 "num_blocks": 8192, 00:12:05.310 "block_size": 4096, 00:12:05.310 "physical_block_size": 4096, 00:12:05.310 "uuid": "39f94f1e-94f6-4b1c-8a75-751e1c55fc1b", 00:12:05.310 "optimal_io_boundary": 0 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "bdev_wait_for_examine" 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "nbd", 00:12:05.310 "config": [] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "scheduler", 00:12:05.310 "config": [ 00:12:05.310 { 00:12:05.310 "method": "framework_set_scheduler", 00:12:05.310 "params": { 00:12:05.310 "name": "static" 00:12:05.310 } 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "subsystem": "nvmf", 00:12:05.310 "config": [ 00:12:05.310 { 00:12:05.310 "method": "nvmf_set_config", 00:12:05.310 "params": { 00:12:05.310 "discovery_filter": "match_any", 00:12:05.310 "admin_cmd_passthru": { 00:12:05.310 "identify_ctrlr": false 00:12:05.310 } 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_set_max_subsystems", 00:12:05.310 "params": { 00:12:05.310 "max_subsystems": 1024 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_set_crdt", 00:12:05.310 "params": { 00:12:05.310 "crdt1": 0, 00:12:05.310 "crdt2": 0, 00:12:05.310 "crdt3": 0 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_create_transport", 00:12:05.310 "params": { 00:12:05.310 "trtype": "TCP", 00:12:05.310 "max_queue_depth": 128, 00:12:05.310 "max_io_qpairs_per_ctrlr": 127, 00:12:05.310 "in_capsule_data_size": 4096, 00:12:05.310 "max_io_size": 131072, 00:12:05.310 "io_unit_size": 131072, 00:12:05.310 "max_aq_depth": 128, 00:12:05.310 "num_shared_buffers": 511, 00:12:05.310 "buf_cache_size": 4294967295, 00:12:05.310 "dif_insert_or_strip": false, 00:12:05.310 "zcopy": false, 00:12:05.310 "c2h_success": false, 00:12:05.310 "sock_priority": 0, 00:12:05.310 "abort_timeout_sec": 1 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_create_subsystem", 00:12:05.310 "params": { 00:12:05.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.310 
"allow_any_host": false, 00:12:05.310 "serial_number": "SPDK00000000000001", 00:12:05.310 "model_number": "SPDK bdev Controller", 00:12:05.310 "max_namespaces": 10, 00:12:05.310 "min_cntlid": 1, 00:12:05.310 "max_cntlid": 65519, 00:12:05.310 "ana_reporting": false 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_subsystem_add_host", 00:12:05.310 "params": { 00:12:05.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.310 "host": "nqn.2016-06.io.spdk:host1", 00:12:05.310 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_subsystem_add_ns", 00:12:05.310 "params": { 00:12:05.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.310 "namespace": { 00:12:05.310 "nsid": 1, 00:12:05.310 "bdev_name": "malloc0", 00:12:05.310 "nguid": "39F94F1E94F64B1C8A75751E1C55FC1B", 00:12:05.310 "uuid": "39f94f1e-94f6-4b1c-8a75-751e1c55fc1b" 00:12:05.310 } 00:12:05.310 } 00:12:05.310 }, 00:12:05.310 { 00:12:05.310 "method": "nvmf_subsystem_add_listener", 00:12:05.310 "params": { 00:12:05.310 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.310 "listen_address": { 00:12:05.310 "trtype": "TCP", 00:12:05.310 "adrfam": "IPv4", 00:12:05.310 "traddr": "10.0.0.2", 00:12:05.310 "trsvcid": "4420" 00:12:05.310 }, 00:12:05.310 "secure_channel": true 00:12:05.310 } 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 } 00:12:05.310 ] 00:12:05.310 }' 00:12:05.310 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 06:37:00 -- nvmf/common.sh@469 -- # nvmfpid=77074 00:12:05.569 06:37:00 -- nvmf/common.sh@470 -- # waitforlisten 77074 00:12:05.569 06:37:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:05.569 06:37:00 -- common/autotest_common.sh@829 -- # '[' -z 77074 ']' 00:12:05.569 06:37:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.569 06:37:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.569 06:37:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.569 06:37:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.569 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:12:05.569 [2024-12-05 06:37:00.829613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:05.569 [2024-12-05 06:37:00.829717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.569 [2024-12-05 06:37:00.968937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.569 [2024-12-05 06:37:01.003752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.569 [2024-12-05 06:37:01.003923] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.569 [2024-12-05 06:37:01.003937] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.569 [2024-12-05 06:37:01.003945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:05.569 [2024-12-05 06:37:01.003975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.827 [2024-12-05 06:37:01.184567] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.827 [2024-12-05 06:37:01.216460] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:05.827 [2024-12-05 06:37:01.216671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.394 06:37:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.394 06:37:01 -- common/autotest_common.sh@862 -- # return 0 00:12:06.394 06:37:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.394 06:37:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.394 06:37:01 -- common/autotest_common.sh@10 -- # set +x 00:12:06.394 06:37:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:06.394 06:37:01 -- target/tls.sh@216 -- # bdevperf_pid=77106 00:12:06.394 06:37:01 -- target/tls.sh@217 -- # waitforlisten 77106 /var/tmp/bdevperf.sock 00:12:06.394 06:37:01 -- common/autotest_common.sh@829 -- # '[' -z 77106 ']' 00:12:06.394 06:37:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:06.394 06:37:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.394 06:37:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:06.394 06:37:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.394 06:37:01 -- common/autotest_common.sh@10 -- # set +x 00:12:06.394 06:37:01 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:06.394 06:37:01 -- target/tls.sh@213 -- # echo '{ 00:12:06.394 "subsystems": [ 00:12:06.394 { 00:12:06.394 "subsystem": "iobuf", 00:12:06.394 "config": [ 00:12:06.394 { 00:12:06.394 "method": "iobuf_set_options", 00:12:06.394 "params": { 00:12:06.394 "small_pool_count": 8192, 00:12:06.394 "large_pool_count": 1024, 00:12:06.394 "small_bufsize": 8192, 00:12:06.394 "large_bufsize": 135168 00:12:06.394 } 00:12:06.394 } 00:12:06.394 ] 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "subsystem": "sock", 00:12:06.394 "config": [ 00:12:06.394 { 00:12:06.394 "method": "sock_impl_set_options", 00:12:06.394 "params": { 00:12:06.394 "impl_name": "uring", 00:12:06.394 "recv_buf_size": 2097152, 00:12:06.394 "send_buf_size": 2097152, 00:12:06.394 "enable_recv_pipe": true, 00:12:06.394 "enable_quickack": false, 00:12:06.394 "enable_placement_id": 0, 00:12:06.394 "enable_zerocopy_send_server": false, 00:12:06.394 "enable_zerocopy_send_client": false, 00:12:06.394 "zerocopy_threshold": 0, 00:12:06.394 "tls_version": 0, 00:12:06.394 "enable_ktls": false 00:12:06.394 } 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "method": "sock_impl_set_options", 00:12:06.394 "params": { 00:12:06.394 "impl_name": "posix", 00:12:06.394 "recv_buf_size": 2097152, 00:12:06.394 "send_buf_size": 2097152, 00:12:06.394 "enable_recv_pipe": true, 00:12:06.394 "enable_quickack": false, 00:12:06.394 "enable_placement_id": 0, 00:12:06.394 "enable_zerocopy_send_server": true, 00:12:06.394 "enable_zerocopy_send_client": false, 00:12:06.394 "zerocopy_threshold": 0, 00:12:06.394 "tls_version": 0, 00:12:06.394 
"enable_ktls": false 00:12:06.394 } 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "method": "sock_impl_set_options", 00:12:06.394 "params": { 00:12:06.394 "impl_name": "ssl", 00:12:06.394 "recv_buf_size": 4096, 00:12:06.394 "send_buf_size": 4096, 00:12:06.394 "enable_recv_pipe": true, 00:12:06.394 "enable_quickack": false, 00:12:06.394 "enable_placement_id": 0, 00:12:06.394 "enable_zerocopy_send_server": true, 00:12:06.394 "enable_zerocopy_send_client": false, 00:12:06.394 "zerocopy_threshold": 0, 00:12:06.394 "tls_version": 0, 00:12:06.394 "enable_ktls": false 00:12:06.394 } 00:12:06.394 } 00:12:06.394 ] 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "subsystem": "vmd", 00:12:06.394 "config": [] 00:12:06.394 }, 00:12:06.394 { 00:12:06.394 "subsystem": "accel", 00:12:06.395 "config": [ 00:12:06.395 { 00:12:06.395 "method": "accel_set_options", 00:12:06.395 "params": { 00:12:06.395 "small_cache_size": 128, 00:12:06.395 "large_cache_size": 16, 00:12:06.395 "task_count": 2048, 00:12:06.395 "sequence_count": 2048, 00:12:06.395 "buf_count": 2048 00:12:06.395 } 00:12:06.395 } 00:12:06.395 ] 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "subsystem": "bdev", 00:12:06.395 "config": [ 00:12:06.395 { 00:12:06.395 "method": "bdev_set_options", 00:12:06.395 "params": { 00:12:06.395 "bdev_io_pool_size": 65535, 00:12:06.395 "bdev_io_cache_size": 256, 00:12:06.395 "bdev_auto_examine": true, 00:12:06.395 "iobuf_small_cache_size": 128, 00:12:06.395 "iobuf_large_cache_size": 16 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "method": "bdev_raid_set_options", 00:12:06.395 "params": { 00:12:06.395 "process_window_size_kb": 1024 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "method": "bdev_iscsi_set_options", 00:12:06.395 "params": { 00:12:06.395 "timeout_sec": 30 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "method": "bdev_nvme_set_options", 00:12:06.395 "params": { 00:12:06.395 "action_on_timeout": "none", 00:12:06.395 "timeout_us": 0, 00:12:06.395 "timeout_admin_us": 0, 00:12:06.395 "keep_alive_timeout_ms": 10000, 00:12:06.395 "transport_retry_count": 4, 00:12:06.395 "arbitration_burst": 0, 00:12:06.395 "low_priority_weight": 0, 00:12:06.395 "medium_priority_weight": 0, 00:12:06.395 "high_priority_weight": 0, 00:12:06.395 "nvme_adminq_poll_period_us": 10000, 00:12:06.395 "nvme_ioq_poll_period_us": 0, 00:12:06.395 "io_queue_requests": 512, 00:12:06.395 "delay_cmd_submit": true, 00:12:06.395 "bdev_retry_count": 3, 00:12:06.395 "transport_ack_timeout": 0, 00:12:06.395 "ctrlr_loss_timeout_sec": 0, 00:12:06.395 "reconnect_delay_sec": 0, 00:12:06.395 "fast_io_fail_timeout_sec": 0, 00:12:06.395 "generate_uuids": false, 00:12:06.395 "transport_tos": 0, 00:12:06.395 "io_path_stat": false, 00:12:06.395 "allow_accel_sequence": false 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "method": "bdev_nvme_attach_controller", 00:12:06.395 "params": { 00:12:06.395 "name": "TLSTEST", 00:12:06.395 "trtype": "TCP", 00:12:06.395 "adrfam": "IPv4", 00:12:06.395 "traddr": "10.0.0.2", 00:12:06.395 "trsvcid": "4420", 00:12:06.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.395 "prchk_reftag": false, 00:12:06.395 "prchk_guard": false, 00:12:06.395 "ctrlr_loss_timeout_sec": 0, 00:12:06.395 "reconnect_delay_sec": 0, 00:12:06.395 "fast_io_fail_timeout_sec": 0, 00:12:06.395 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:06.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.395 "hdgst": false, 00:12:06.395 "ddgst": false 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 
{ 00:12:06.395 "method": "bdev_nvme_set_hotplug", 00:12:06.395 "params": { 00:12:06.395 "period_us": 100000, 00:12:06.395 "enable": false 00:12:06.395 } 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "method": "bdev_wait_for_examine" 00:12:06.395 } 00:12:06.395 ] 00:12:06.395 }, 00:12:06.395 { 00:12:06.395 "subsystem": "nbd", 00:12:06.395 "config": [] 00:12:06.395 } 00:12:06.395 ] 00:12:06.395 }' 00:12:06.653 [2024-12-05 06:37:01.882276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:06.653 [2024-12-05 06:37:01.882412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77106 ] 00:12:06.653 [2024-12-05 06:37:02.015949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.653 [2024-12-05 06:37:02.051430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.910 [2024-12-05 06:37:02.175144] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:07.476 06:37:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.476 06:37:02 -- common/autotest_common.sh@862 -- # return 0 00:12:07.476 06:37:02 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:07.733 Running I/O for 10 seconds... 00:12:17.723 00:12:17.723 Latency(us) 00:12:17.723 [2024-12-05T06:37:13.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.723 [2024-12-05T06:37:13.189Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:17.723 Verification LBA range: start 0x0 length 0x2000 00:12:17.723 TLSTESTn1 : 10.01 6339.49 24.76 0.00 0.00 20160.46 2591.65 21686.46 00:12:17.723 [2024-12-05T06:37:13.189Z] =================================================================================================================== 00:12:17.723 [2024-12-05T06:37:13.189Z] Total : 6339.49 24.76 0.00 0.00 20160.46 2591.65 21686.46 00:12:17.723 0 00:12:17.723 06:37:13 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.723 06:37:13 -- target/tls.sh@223 -- # killprocess 77106 00:12:17.723 06:37:13 -- common/autotest_common.sh@936 -- # '[' -z 77106 ']' 00:12:17.723 06:37:13 -- common/autotest_common.sh@940 -- # kill -0 77106 00:12:17.723 06:37:13 -- common/autotest_common.sh@941 -- # uname 00:12:17.723 06:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.723 06:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77106 00:12:17.723 killing process with pid 77106 00:12:17.723 Received shutdown signal, test time was about 10.000000 seconds 00:12:17.723 00:12:17.723 Latency(us) 00:12:17.723 [2024-12-05T06:37:13.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.723 [2024-12-05T06:37:13.189Z] =================================================================================================================== 00:12:17.723 [2024-12-05T06:37:13.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:17.724 06:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:17.724 06:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:17.724 06:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77106' 00:12:17.724 06:37:13 -- 
common/autotest_common.sh@955 -- # kill 77106 00:12:17.724 06:37:13 -- common/autotest_common.sh@960 -- # wait 77106 00:12:17.724 06:37:13 -- target/tls.sh@224 -- # killprocess 77074 00:12:17.724 06:37:13 -- common/autotest_common.sh@936 -- # '[' -z 77074 ']' 00:12:17.724 06:37:13 -- common/autotest_common.sh@940 -- # kill -0 77074 00:12:17.724 06:37:13 -- common/autotest_common.sh@941 -- # uname 00:12:17.724 06:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.724 06:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77074 00:12:17.983 killing process with pid 77074 00:12:17.983 06:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:17.983 06:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:17.983 06:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77074' 00:12:17.983 06:37:13 -- common/autotest_common.sh@955 -- # kill 77074 00:12:17.983 06:37:13 -- common/autotest_common.sh@960 -- # wait 77074 00:12:17.983 06:37:13 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:17.983 06:37:13 -- target/tls.sh@227 -- # cleanup 00:12:17.983 06:37:13 -- target/tls.sh@15 -- # process_shm --id 0 00:12:17.983 06:37:13 -- common/autotest_common.sh@806 -- # type=--id 00:12:17.983 06:37:13 -- common/autotest_common.sh@807 -- # id=0 00:12:17.983 06:37:13 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:17.983 06:37:13 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:17.983 06:37:13 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:17.983 06:37:13 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:17.983 06:37:13 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:17.983 06:37:13 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:17.983 nvmf_trace.0 00:12:17.983 06:37:13 -- common/autotest_common.sh@821 -- # return 0 00:12:17.983 06:37:13 -- target/tls.sh@16 -- # killprocess 77106 00:12:17.983 06:37:13 -- common/autotest_common.sh@936 -- # '[' -z 77106 ']' 00:12:17.983 06:37:13 -- common/autotest_common.sh@940 -- # kill -0 77106 00:12:17.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77106) - No such process 00:12:17.983 Process with pid 77106 is not found 00:12:17.983 06:37:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77106 is not found' 00:12:17.983 06:37:13 -- target/tls.sh@17 -- # nvmftestfini 00:12:17.983 06:37:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:17.983 06:37:13 -- nvmf/common.sh@116 -- # sync 00:12:18.242 06:37:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.242 06:37:13 -- nvmf/common.sh@119 -- # set +e 00:12:18.242 06:37:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.242 06:37:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.242 rmmod nvme_tcp 00:12:18.242 rmmod nvme_fabrics 00:12:18.242 rmmod nvme_keyring 00:12:18.242 06:37:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.242 06:37:13 -- nvmf/common.sh@123 -- # set -e 00:12:18.242 06:37:13 -- nvmf/common.sh@124 -- # return 0 00:12:18.242 06:37:13 -- nvmf/common.sh@477 -- # '[' -n 77074 ']' 00:12:18.242 06:37:13 -- nvmf/common.sh@478 -- # killprocess 77074 00:12:18.242 06:37:13 -- common/autotest_common.sh@936 -- # '[' -z 77074 ']' 00:12:18.242 06:37:13 -- common/autotest_common.sh@940 -- # kill -0 77074 00:12:18.242 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77074) - No such process 00:12:18.242 Process with pid 77074 is not found 00:12:18.242 06:37:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77074 is not found' 00:12:18.242 06:37:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.242 06:37:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.242 06:37:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.242 06:37:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.242 06:37:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.242 06:37:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.242 06:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.242 06:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.242 06:37:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.242 06:37:13 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:18.242 00:12:18.242 real 1m7.576s 00:12:18.242 user 1m46.053s 00:12:18.242 sys 0m23.255s 00:12:18.242 06:37:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:18.242 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:18.242 ************************************ 00:12:18.242 END TEST nvmf_tls 00:12:18.242 ************************************ 00:12:18.242 06:37:13 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:18.242 06:37:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.242 06:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.242 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:18.242 ************************************ 00:12:18.242 START TEST nvmf_fips 00:12:18.242 ************************************ 00:12:18.242 06:37:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:18.242 * Looking for test storage... 
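The killprocess calls above show both outcomes of the helper: pids 77106 and 77074 are signalled and reaped on the first call, while the repeat calls during cleanup trip the kill -0 liveness probe and only print a notice. A condensed sketch of that logic, assuming a simplified form of the autotest_common.sh helper (the real one also inspects the process name and sudo ownership before killing):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then           # liveness probe, as traced at line 940
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                      # reap it so sockets and shm are released
    }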
00:12:18.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:18.242 06:37:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:18.242 06:37:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:18.242 06:37:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:18.501 06:37:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:18.501 06:37:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:18.501 06:37:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.501 06:37:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.501 06:37:13 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.501 06:37:13 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.501 06:37:13 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.501 06:37:13 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.501 06:37:13 -- scripts/common.sh@337 -- # local 'op=<' 00:12:18.501 06:37:13 -- scripts/common.sh@339 -- # ver1_l=2 00:12:18.501 06:37:13 -- scripts/common.sh@340 -- # ver2_l=1 00:12:18.501 06:37:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.501 06:37:13 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.502 06:37:13 -- scripts/common.sh@344 -- # : 1 00:12:18.502 06:37:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.502 06:37:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.502 06:37:13 -- scripts/common.sh@364 -- # decimal 1 00:12:18.502 06:37:13 -- scripts/common.sh@352 -- # local d=1 00:12:18.502 06:37:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.502 06:37:13 -- scripts/common.sh@354 -- # echo 1 00:12:18.502 06:37:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.502 06:37:13 -- scripts/common.sh@365 -- # decimal 2 00:12:18.502 06:37:13 -- scripts/common.sh@352 -- # local d=2 00:12:18.502 06:37:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.502 06:37:13 -- scripts/common.sh@354 -- # echo 2 00:12:18.502 06:37:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:18.502 06:37:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.502 06:37:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:18.502 06:37:13 -- scripts/common.sh@367 -- # return 0 00:12:18.502 06:37:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.502 06:37:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.502 --rc genhtml_branch_coverage=1 00:12:18.502 --rc genhtml_function_coverage=1 00:12:18.502 --rc genhtml_legend=1 00:12:18.502 --rc geninfo_all_blocks=1 00:12:18.502 --rc geninfo_unexecuted_blocks=1 00:12:18.502 00:12:18.502 ' 00:12:18.502 06:37:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.502 --rc genhtml_branch_coverage=1 00:12:18.502 --rc genhtml_function_coverage=1 00:12:18.502 --rc genhtml_legend=1 00:12:18.502 --rc geninfo_all_blocks=1 00:12:18.502 --rc geninfo_unexecuted_blocks=1 00:12:18.502 00:12:18.502 ' 00:12:18.502 06:37:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.502 --rc genhtml_branch_coverage=1 00:12:18.502 --rc genhtml_function_coverage=1 00:12:18.502 --rc genhtml_legend=1 00:12:18.502 --rc geninfo_all_blocks=1 00:12:18.502 --rc geninfo_unexecuted_blocks=1 00:12:18.502 00:12:18.502 ' 00:12:18.502 
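The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version field by field; the same helper family reappears a few lines below as ge 3.1.1 3.0.0, requiring an OpenSSL 3.x before the FIPS checks proceed. A self-contained sketch of that component-wise comparison (it assumes purely numeric fields, which holds for the versions compared in this run):

    ver_ge() {    # return 0 iff $1 >= $2 when split on '.', '-' and ':'
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((10#${v1[i]:-0} > 10#${v2[i]:-0})) && return 0
            ((10#${v1[i]:-0} < 10#${v2[i]:-0})) && return 1
        done
        return 0    # all fields equal counts as >=
    }
    ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is 3.0.0 or newer"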
06:37:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.502 --rc genhtml_branch_coverage=1 00:12:18.502 --rc genhtml_function_coverage=1 00:12:18.502 --rc genhtml_legend=1 00:12:18.502 --rc geninfo_all_blocks=1 00:12:18.502 --rc geninfo_unexecuted_blocks=1 00:12:18.502 00:12:18.502 ' 00:12:18.502 06:37:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.502 06:37:13 -- nvmf/common.sh@7 -- # uname -s 00:12:18.502 06:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.502 06:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.502 06:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.502 06:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.502 06:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.502 06:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.502 06:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.502 06:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.502 06:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.502 06:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.502 06:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:12:18.502 06:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:12:18.502 06:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.502 06:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.502 06:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.502 06:37:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.502 06:37:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.502 06:37:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.502 06:37:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.502 06:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.502 06:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.502 06:37:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.502 06:37:13 -- paths/export.sh@5 -- # export PATH 00:12:18.502 06:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.502 06:37:13 -- nvmf/common.sh@46 -- # : 0 00:12:18.502 06:37:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:18.502 06:37:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:18.502 06:37:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:18.502 06:37:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.502 06:37:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.502 06:37:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:18.502 06:37:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:18.502 06:37:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:18.502 06:37:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.502 06:37:13 -- fips/fips.sh@89 -- # check_openssl_version 00:12:18.502 06:37:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:18.502 06:37:13 -- fips/fips.sh@85 -- # openssl version 00:12:18.502 06:37:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:18.502 06:37:13 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:12:18.502 06:37:13 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:18.502 06:37:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.502 06:37:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.502 06:37:13 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.502 06:37:13 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.502 06:37:13 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.502 06:37:13 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.502 06:37:13 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:18.502 06:37:13 -- scripts/common.sh@339 -- # ver1_l=3 00:12:18.503 06:37:13 -- scripts/common.sh@340 -- # ver2_l=3 00:12:18.503 06:37:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.503 06:37:13 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.503 06:37:13 -- scripts/common.sh@347 -- # : 1 00:12:18.503 06:37:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.503 06:37:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.503 06:37:13 -- scripts/common.sh@364 -- # decimal 3 00:12:18.503 06:37:13 -- scripts/common.sh@352 -- # local d=3 00:12:18.503 06:37:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:18.503 06:37:13 -- scripts/common.sh@354 -- # echo 3 00:12:18.503 06:37:13 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:18.503 06:37:13 -- scripts/common.sh@365 -- # decimal 3 00:12:18.503 06:37:13 -- scripts/common.sh@352 -- # local d=3 00:12:18.503 06:37:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:18.503 06:37:13 -- scripts/common.sh@354 -- # echo 3 00:12:18.503 06:37:13 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:18.503 06:37:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.503 06:37:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:18.503 06:37:13 -- scripts/common.sh@363 -- # (( v++ )) 00:12:18.503 06:37:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.503 06:37:13 -- scripts/common.sh@364 -- # decimal 1 00:12:18.503 06:37:13 -- scripts/common.sh@352 -- # local d=1 00:12:18.503 06:37:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.503 06:37:13 -- scripts/common.sh@354 -- # echo 1 00:12:18.503 06:37:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.503 06:37:13 -- scripts/common.sh@365 -- # decimal 0 00:12:18.503 06:37:13 -- scripts/common.sh@352 -- # local d=0 00:12:18.503 06:37:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:18.503 06:37:13 -- scripts/common.sh@354 -- # echo 0 00:12:18.503 06:37:13 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:18.503 06:37:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.503 06:37:13 -- scripts/common.sh@366 -- # return 0 00:12:18.503 06:37:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:18.503 06:37:13 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:18.503 06:37:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:18.503 06:37:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:18.503 06:37:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:18.503 06:37:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:18.503 06:37:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:18.503 06:37:13 -- fips/fips.sh@113 -- # build_openssl_config 00:12:18.503 06:37:13 -- fips/fips.sh@37 -- # cat 00:12:18.503 06:37:13 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:12:18.503 06:37:13 -- fips/fips.sh@58 -- # cat - 00:12:18.503 06:37:13 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:18.503 06:37:13 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:18.503 06:37:13 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:18.503 06:37:13 -- fips/fips.sh@116 -- # openssl list -providers 00:12:18.503 06:37:13 -- fips/fips.sh@116 -- # grep name 00:12:18.503 06:37:13 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:18.503 06:37:13 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:18.503 06:37:13 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:18.503 06:37:13 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:18.503 06:37:13 -- fips/fips.sh@127 -- # : 00:12:18.503 06:37:13 -- common/autotest_common.sh@650 -- # local es=0 00:12:18.503 06:37:13 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:18.503 06:37:13 -- common/autotest_common.sh@638 -- # local arg=openssl 00:12:18.503 06:37:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.503 06:37:13 -- common/autotest_common.sh@642 -- # type -t openssl 00:12:18.503 06:37:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.503 06:37:13 -- common/autotest_common.sh@644 -- # type -P openssl 00:12:18.503 06:37:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.503 06:37:13 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:12:18.503 06:37:13 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:12:18.503 06:37:13 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:12:18.503 Error setting digest 00:12:18.503 4012076EDE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:18.503 4012076EDE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:18.503 06:37:13 -- common/autotest_common.sh@653 -- # es=1 00:12:18.503 06:37:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.503 06:37:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.503 06:37:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.503 06:37:13 -- fips/fips.sh@130 -- # nvmftestinit 00:12:18.503 06:37:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:18.503 06:37:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.503 06:37:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:18.503 06:37:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:18.503 06:37:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:18.503 06:37:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.762 06:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.763 06:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.763 06:37:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:18.763 06:37:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:18.763 06:37:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:18.763 06:37:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:18.763 06:37:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:18.763 06:37:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:18.763 06:37:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.763 06:37:13 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.763 06:37:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.763 06:37:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:18.763 06:37:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.763 06:37:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.763 06:37:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.763 06:37:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.763 06:37:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.763 06:37:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.763 06:37:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.763 06:37:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.763 06:37:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:18.763 06:37:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:18.763 Cannot find device "nvmf_tgt_br" 00:12:18.763 06:37:14 -- nvmf/common.sh@154 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.763 Cannot find device "nvmf_tgt_br2" 00:12:18.763 06:37:14 -- nvmf/common.sh@155 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:18.763 06:37:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:18.763 Cannot find device "nvmf_tgt_br" 00:12:18.763 06:37:14 -- nvmf/common.sh@157 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:18.763 Cannot find device "nvmf_tgt_br2" 00:12:18.763 06:37:14 -- nvmf/common.sh@158 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:18.763 06:37:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:18.763 06:37:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.763 06:37:14 -- nvmf/common.sh@161 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.763 06:37:14 -- nvmf/common.sh@162 -- # true 00:12:18.763 06:37:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.763 06:37:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.763 06:37:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.763 06:37:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.763 06:37:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.763 06:37:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.763 06:37:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.763 06:37:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:18.763 06:37:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:18.763 06:37:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:18.763 06:37:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:18.763 06:37:14 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:18.763 06:37:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:18.763 06:37:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.763 06:37:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.024 06:37:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.024 06:37:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:19.024 06:37:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:19.024 06:37:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.024 06:37:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.024 06:37:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.024 06:37:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.024 06:37:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.024 06:37:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:19.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:12:19.024 00:12:19.024 --- 10.0.0.2 ping statistics --- 00:12:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.024 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:19.024 06:37:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:19.024 00:12:19.024 --- 10.0.0.3 ping statistics --- 00:12:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.024 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:19.024 06:37:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:19.024 00:12:19.024 --- 10.0.0.1 ping statistics --- 00:12:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.024 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:19.024 06:37:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.024 06:37:14 -- nvmf/common.sh@421 -- # return 0 00:12:19.024 06:37:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.024 06:37:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.024 06:37:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.024 06:37:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.024 06:37:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.024 06:37:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.024 06:37:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.024 06:37:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:19.024 06:37:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.024 06:37:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.024 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 06:37:14 -- nvmf/common.sh@469 -- # nvmfpid=77462 00:12:19.024 06:37:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:19.024 06:37:14 -- nvmf/common.sh@470 -- # waitforlisten 77462 00:12:19.024 06:37:14 -- common/autotest_common.sh@829 -- # '[' -z 77462 ']' 00:12:19.024 06:37:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.024 06:37:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.024 06:37:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.024 06:37:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.024 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 [2024-12-05 06:37:14.415081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:19.024 [2024-12-05 06:37:14.415208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.283 [2024-12-05 06:37:14.558672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.283 [2024-12-05 06:37:14.601866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.283 [2024-12-05 06:37:14.602137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.283 [2024-12-05 06:37:14.602160] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.283 [2024-12-05 06:37:14.602175] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
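The nvmf_veth_init sequence traced above is plain iproute2. Condensed to the commands that matter (the second target interface nvmf_tgt_if2/10.0.0.3 and the teardown/flush steps are omitted for brevity; names and addresses are the ones used throughout this run), the topology the pings just verified is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge ties the two pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target, as verified above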
00:12:19.283 [2024-12-05 06:37:14.602211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.219 06:37:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.219 06:37:15 -- common/autotest_common.sh@862 -- # return 0 00:12:20.219 06:37:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:20.219 06:37:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.219 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 06:37:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.219 06:37:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:20.219 06:37:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:20.219 06:37:15 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:20.219 06:37:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:20.219 06:37:15 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:20.219 06:37:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:20.219 06:37:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:20.219 06:37:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.477 [2024-12-05 06:37:15.732170] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.477 [2024-12-05 06:37:15.748125] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:20.477 [2024-12-05 06:37:15.748399] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.477 malloc0 00:12:20.477 06:37:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:20.478 06:37:15 -- fips/fips.sh@147 -- # bdevperf_pid=77500 00:12:20.478 06:37:15 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:20.478 06:37:15 -- fips/fips.sh@148 -- # waitforlisten 77500 /var/tmp/bdevperf.sock 00:12:20.478 06:37:15 -- common/autotest_common.sh@829 -- # '[' -z 77500 ']' 00:12:20.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.478 06:37:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.478 06:37:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.478 06:37:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.478 06:37:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.478 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.478 [2024-12-05 06:37:15.870772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:20.478 [2024-12-05 06:37:15.870873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77500 ] 00:12:20.736 [2024-12-05 06:37:16.011204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.736 [2024-12-05 06:37:16.050000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.673 06:37:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.673 06:37:16 -- common/autotest_common.sh@862 -- # return 0 00:12:21.673 06:37:16 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:21.673 [2024-12-05 06:37:17.062790] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:21.673 TLSTESTn1 00:12:21.932 06:37:17 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:21.932 Running I/O for 10 seconds... 00:12:31.907 00:12:31.907 Latency(us) 00:12:31.907 [2024-12-05T06:37:27.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.907 [2024-12-05T06:37:27.373Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:31.907 Verification LBA range: start 0x0 length 0x2000 00:12:31.907 TLSTESTn1 : 10.02 5211.05 20.36 0.00 0.00 24519.13 4289.63 19779.96 00:12:31.907 [2024-12-05T06:37:27.373Z] =================================================================================================================== 00:12:31.907 [2024-12-05T06:37:27.373Z] Total : 5211.05 20.36 0.00 0.00 24519.13 4289.63 19779.96 00:12:31.907 0 00:12:31.907 06:37:27 -- fips/fips.sh@1 -- # cleanup 00:12:31.907 06:37:27 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:31.907 06:37:27 -- common/autotest_common.sh@806 -- # type=--id 00:12:31.907 06:37:27 -- common/autotest_common.sh@807 -- # id=0 00:12:31.907 06:37:27 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:31.907 06:37:27 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:31.907 06:37:27 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:31.907 06:37:27 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:31.907 06:37:27 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:31.907 06:37:27 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:31.907 nvmf_trace.0 00:12:32.166 06:37:27 -- common/autotest_common.sh@821 -- # return 0 00:12:32.166 06:37:27 -- fips/fips.sh@16 -- # killprocess 77500 00:12:32.166 06:37:27 -- common/autotest_common.sh@936 -- # '[' -z 77500 ']' 00:12:32.166 06:37:27 -- common/autotest_common.sh@940 -- # kill -0 77500 00:12:32.166 06:37:27 -- common/autotest_common.sh@941 -- # uname 00:12:32.166 06:37:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.166 06:37:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77500 00:12:32.166 06:37:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:32.166 06:37:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:32.166 
killing process with pid 77500 00:12:32.166 06:37:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77500' 00:12:32.166 06:37:27 -- common/autotest_common.sh@955 -- # kill 77500 00:12:32.166 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.166 00:12:32.166 Latency(us) 00:12:32.166 [2024-12-05T06:37:27.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.166 [2024-12-05T06:37:27.632Z] =================================================================================================================== 00:12:32.166 [2024-12-05T06:37:27.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:32.166 06:37:27 -- common/autotest_common.sh@960 -- # wait 77500 00:12:32.166 06:37:27 -- fips/fips.sh@17 -- # nvmftestfini 00:12:32.166 06:37:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:32.166 06:37:27 -- nvmf/common.sh@116 -- # sync 00:12:32.166 06:37:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:32.166 06:37:27 -- nvmf/common.sh@119 -- # set +e 00:12:32.166 06:37:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:32.166 06:37:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:32.166 rmmod nvme_tcp 00:12:32.424 rmmod nvme_fabrics 00:12:32.424 rmmod nvme_keyring 00:12:32.424 06:37:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.424 06:37:27 -- nvmf/common.sh@123 -- # set -e 00:12:32.424 06:37:27 -- nvmf/common.sh@124 -- # return 0 00:12:32.424 06:37:27 -- nvmf/common.sh@477 -- # '[' -n 77462 ']' 00:12:32.424 06:37:27 -- nvmf/common.sh@478 -- # killprocess 77462 00:12:32.424 06:37:27 -- common/autotest_common.sh@936 -- # '[' -z 77462 ']' 00:12:32.425 06:37:27 -- common/autotest_common.sh@940 -- # kill -0 77462 00:12:32.425 06:37:27 -- common/autotest_common.sh@941 -- # uname 00:12:32.425 06:37:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.425 06:37:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77462 00:12:32.425 06:37:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:32.425 06:37:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:32.425 killing process with pid 77462 00:12:32.425 06:37:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77462' 00:12:32.425 06:37:27 -- common/autotest_common.sh@955 -- # kill 77462 00:12:32.425 06:37:27 -- common/autotest_common.sh@960 -- # wait 77462 00:12:32.425 06:37:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.425 06:37:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.425 06:37:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.425 06:37:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.425 06:37:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.425 06:37:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.425 06:37:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.425 06:37:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.683 06:37:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:32.683 06:37:27 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:32.683 00:12:32.683 real 0m14.265s 00:12:32.683 user 0m19.522s 00:12:32.683 sys 0m5.689s 00:12:32.683 06:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.683 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:32.683 ************************************ 00:12:32.683 END TEST nvmf_fips 
00:12:32.683 ************************************ 00:12:32.683 06:37:27 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:32.683 06:37:27 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:32.683 06:37:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.683 06:37:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.683 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:32.683 ************************************ 00:12:32.683 START TEST nvmf_fuzz 00:12:32.683 ************************************ 00:12:32.683 06:37:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:32.683 * Looking for test storage... 00:12:32.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.683 06:37:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:32.683 06:37:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:32.683 06:37:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:32.683 06:37:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:32.683 06:37:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:32.683 06:37:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:32.683 06:37:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:32.683 06:37:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:32.683 06:37:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:32.683 06:37:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.683 06:37:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:32.683 06:37:28 -- scripts/common.sh@337 -- # local 'op=<' 00:12:32.683 06:37:28 -- scripts/common.sh@339 -- # ver1_l=2 00:12:32.683 06:37:28 -- scripts/common.sh@340 -- # ver2_l=1 00:12:32.683 06:37:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:32.683 06:37:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:32.684 06:37:28 -- scripts/common.sh@344 -- # : 1 00:12:32.684 06:37:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:32.684 06:37:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.684 06:37:28 -- scripts/common.sh@364 -- # decimal 1 00:12:32.684 06:37:28 -- scripts/common.sh@352 -- # local d=1 00:12:32.684 06:37:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.684 06:37:28 -- scripts/common.sh@354 -- # echo 1 00:12:32.684 06:37:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:32.684 06:37:28 -- scripts/common.sh@365 -- # decimal 2 00:12:32.684 06:37:28 -- scripts/common.sh@352 -- # local d=2 00:12:32.684 06:37:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.684 06:37:28 -- scripts/common.sh@354 -- # echo 2 00:12:32.684 06:37:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:32.684 06:37:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:32.684 06:37:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:32.684 06:37:28 -- scripts/common.sh@367 -- # return 0 00:12:32.684 06:37:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.684 06:37:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.684 --rc genhtml_branch_coverage=1 00:12:32.684 --rc genhtml_function_coverage=1 00:12:32.684 --rc genhtml_legend=1 00:12:32.684 --rc geninfo_all_blocks=1 00:12:32.684 --rc geninfo_unexecuted_blocks=1 00:12:32.684 00:12:32.684 ' 00:12:32.684 06:37:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.684 --rc genhtml_branch_coverage=1 00:12:32.684 --rc genhtml_function_coverage=1 00:12:32.684 --rc genhtml_legend=1 00:12:32.684 --rc geninfo_all_blocks=1 00:12:32.684 --rc geninfo_unexecuted_blocks=1 00:12:32.684 00:12:32.684 ' 00:12:32.684 06:37:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.684 --rc genhtml_branch_coverage=1 00:12:32.684 --rc genhtml_function_coverage=1 00:12:32.684 --rc genhtml_legend=1 00:12:32.684 --rc geninfo_all_blocks=1 00:12:32.684 --rc geninfo_unexecuted_blocks=1 00:12:32.684 00:12:32.684 ' 00:12:32.684 06:37:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.684 --rc genhtml_branch_coverage=1 00:12:32.684 --rc genhtml_function_coverage=1 00:12:32.684 --rc genhtml_legend=1 00:12:32.684 --rc geninfo_all_blocks=1 00:12:32.684 --rc geninfo_unexecuted_blocks=1 00:12:32.684 00:12:32.684 ' 00:12:32.684 06:37:28 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.684 06:37:28 -- nvmf/common.sh@7 -- # uname -s 00:12:32.684 06:37:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.684 06:37:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.684 06:37:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.684 06:37:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.684 06:37:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.684 06:37:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.684 06:37:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.684 06:37:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.684 06:37:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.684 06:37:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.942 06:37:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 
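The scripts/common.sh walk traced above is the coverage check "lt 1.15 2" running a generic version comparator. A condensed sketch of that logic (not a copy of cmp_versions itself): split both versions on the separators, then compare numerically field by field, treating missing fields as zero.

    version_lt() {
        local -a a b
        local v n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first lower field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less
    }
    version_lt 1.15 2 && echo older   # 1 < 2, so the installed lcov is pre-2.x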
00:12:32.942 06:37:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:12:32.943 06:37:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.943 06:37:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.943 06:37:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.943 06:37:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.943 06:37:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.943 06:37:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.943 06:37:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.943 06:37:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.943 06:37:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.943 06:37:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.943 06:37:28 -- paths/export.sh@5 -- # export PATH 00:12:32.943 06:37:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.943 06:37:28 -- nvmf/common.sh@46 -- # : 0 00:12:32.943 06:37:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.943 06:37:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.943 06:37:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.943 06:37:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.943 06:37:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.943 06:37:28 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:32.943 06:37:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.943 06:37:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.943 06:37:28 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:32.943 06:37:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.943 06:37:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.943 06:37:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.943 06:37:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.943 06:37:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.943 06:37:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.943 06:37:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.943 06:37:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.943 06:37:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:32.943 06:37:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:32.943 06:37:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:32.943 06:37:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:32.943 06:37:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:32.943 06:37:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:32.943 06:37:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.943 06:37:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.943 06:37:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:32.943 06:37:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:32.943 06:37:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.943 06:37:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.943 06:37:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.943 06:37:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.943 06:37:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.943 06:37:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.943 06:37:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.943 06:37:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.943 06:37:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:32.943 06:37:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:32.943 Cannot find device "nvmf_tgt_br" 00:12:32.943 06:37:28 -- nvmf/common.sh@154 -- # true 00:12:32.943 06:37:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.943 Cannot find device "nvmf_tgt_br2" 00:12:32.943 06:37:28 -- nvmf/common.sh@155 -- # true 00:12:32.943 06:37:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:32.943 06:37:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:32.943 Cannot find device "nvmf_tgt_br" 00:12:32.943 06:37:28 -- nvmf/common.sh@157 -- # true 00:12:32.943 06:37:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:32.943 Cannot find device "nvmf_tgt_br2" 00:12:32.943 06:37:28 -- nvmf/common.sh@158 -- # true 00:12:32.943 06:37:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:32.943 06:37:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:32.943 06:37:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.943 06:37:28 -- nvmf/common.sh@161 -- # true 00:12:32.943 06:37:28 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.943 06:37:28 -- nvmf/common.sh@162 -- # true 00:12:32.943 06:37:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.943 06:37:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.943 06:37:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.943 06:37:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.943 06:37:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.943 06:37:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.943 06:37:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.943 06:37:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.943 06:37:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.943 06:37:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:32.943 06:37:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:32.943 06:37:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:32.943 06:37:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:33.202 06:37:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.202 06:37:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.202 06:37:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.202 06:37:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:33.202 06:37:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:33.202 06:37:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.202 06:37:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.202 06:37:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.202 06:37:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.202 06:37:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.202 06:37:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:33.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:12:33.202 00:12:33.202 --- 10.0.0.2 ping statistics --- 00:12:33.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.202 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:33.202 06:37:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:33.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:33.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:33.202 00:12:33.202 --- 10.0.0.3 ping statistics --- 00:12:33.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.202 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:33.202 06:37:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:33.202 00:12:33.202 --- 10.0.0.1 ping statistics --- 00:12:33.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.202 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:33.202 06:37:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.202 06:37:28 -- nvmf/common.sh@421 -- # return 0 00:12:33.202 06:37:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:33.202 06:37:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.202 06:37:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:33.202 06:37:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:33.202 06:37:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.202 06:37:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:33.202 06:37:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:33.202 06:37:28 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77827 00:12:33.202 06:37:28 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:33.202 06:37:28 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:33.202 06:37:28 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77827 00:12:33.202 06:37:28 -- common/autotest_common.sh@829 -- # '[' -z 77827 ']' 00:12:33.202 06:37:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.202 06:37:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.202 06:37:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
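Both this fuzz run and the multiconnection run below rebuild the same veth topology via nvmf_veth_init, traced in full above. Stripped of the teardown and link-up housekeeping, its core is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the *_br peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # the sanity pings shown above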
00:12:33.202 06:37:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.202 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 06:37:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.461 06:37:28 -- common/autotest_common.sh@862 -- # return 0 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.461 06:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.461 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 06:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:33.461 06:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.461 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 Malloc0 00:12:33.461 06:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.461 06:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.461 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 06:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.461 06:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.461 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 06:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.461 06:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.461 06:37:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.461 06:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:33.461 06:37:28 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:33.720 Shutting down the fuzz application 00:12:33.720 06:37:29 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:33.977 Shutting down the fuzz application 00:12:33.977 06:37:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.977 06:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.977 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.977 06:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.977 06:37:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:33.977 06:37:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:33.977 06:37:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.977 06:37:29 -- nvmf/common.sh@116 -- # sync 00:12:33.977 06:37:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.977 06:37:29 -- nvmf/common.sh@119 -- # set +e 00:12:33.977 06:37:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.977 06:37:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:34.235 rmmod nvme_tcp 00:12:34.235 rmmod nvme_fabrics 00:12:34.235 rmmod nvme_keyring 00:12:34.235 06:37:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:34.235 06:37:29 -- nvmf/common.sh@123 -- # set -e 00:12:34.235 06:37:29 -- nvmf/common.sh@124 -- # return 0 00:12:34.235 06:37:29 -- nvmf/common.sh@477 -- # '[' -n 77827 ']' 00:12:34.235 06:37:29 -- nvmf/common.sh@478 -- # killprocess 77827 00:12:34.235 06:37:29 -- common/autotest_common.sh@936 -- # '[' -z 77827 ']' 00:12:34.235 06:37:29 -- common/autotest_common.sh@940 -- # kill -0 77827 00:12:34.235 06:37:29 -- common/autotest_common.sh@941 -- # uname 00:12:34.235 06:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.235 06:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77827 00:12:34.235 06:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.235 06:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.235 killing process with pid 77827 00:12:34.235 06:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77827' 00:12:34.235 06:37:29 -- common/autotest_common.sh@955 -- # kill 77827 00:12:34.235 06:37:29 -- common/autotest_common.sh@960 -- # wait 77827 00:12:34.235 06:37:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:34.235 06:37:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:34.235 06:37:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:34.235 06:37:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.235 06:37:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:34.235 06:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.235 06:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.235 06:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.493 06:37:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:34.493 06:37:29 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:34.493 00:12:34.493 real 0m1.778s 00:12:34.493 user 0m1.605s 00:12:34.493 sys 0m0.554s 00:12:34.493 06:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.493 ************************************ 00:12:34.493 END TEST nvmf_fuzz 00:12:34.493 ************************************ 00:12:34.493 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:12:34.493 06:37:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:34.493 06:37:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:34.493 06:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.493 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:12:34.493 ************************************ 00:12:34.493 START TEST nvmf_multiconnection 00:12:34.493 ************************************ 00:12:34.493 06:37:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:34.493 * Looking for test storage... 
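The nvmf_fuzz test that just ended above invoked nvme_fuzz twice against the same subsystem: a 30-second randomized pass with a fixed seed, then a replay of the canned JSON corpus. Restated from the trace, with repo paths shortened:

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
        -t 30 -S 123456 -F "$trid" -N -a                        # randomized, seeded, 30 s
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
        -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a   # JSON corpus replay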
00:12:34.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.493 06:37:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:34.493 06:37:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:34.493 06:37:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:34.493 06:37:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:34.493 06:37:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:34.493 06:37:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:34.493 06:37:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:34.493 06:37:29 -- scripts/common.sh@335 -- # IFS=.-: 00:12:34.493 06:37:29 -- scripts/common.sh@335 -- # read -ra ver1 00:12:34.493 06:37:29 -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.493 06:37:29 -- scripts/common.sh@336 -- # read -ra ver2 00:12:34.493 06:37:29 -- scripts/common.sh@337 -- # local 'op=<' 00:12:34.493 06:37:29 -- scripts/common.sh@339 -- # ver1_l=2 00:12:34.493 06:37:29 -- scripts/common.sh@340 -- # ver2_l=1 00:12:34.493 06:37:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:34.493 06:37:29 -- scripts/common.sh@343 -- # case "$op" in 00:12:34.493 06:37:29 -- scripts/common.sh@344 -- # : 1 00:12:34.493 06:37:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:34.493 06:37:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.493 06:37:29 -- scripts/common.sh@364 -- # decimal 1 00:12:34.493 06:37:29 -- scripts/common.sh@352 -- # local d=1 00:12:34.493 06:37:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.493 06:37:29 -- scripts/common.sh@354 -- # echo 1 00:12:34.493 06:37:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.493 06:37:29 -- scripts/common.sh@365 -- # decimal 2 00:12:34.493 06:37:29 -- scripts/common.sh@352 -- # local d=2 00:12:34.493 06:37:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.493 06:37:29 -- scripts/common.sh@354 -- # echo 2 00:12:34.493 06:37:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.493 06:37:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.493 06:37:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.493 06:37:29 -- scripts/common.sh@367 -- # return 0 00:12:34.493 06:37:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.493 06:37:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.493 --rc genhtml_branch_coverage=1 00:12:34.493 --rc genhtml_function_coverage=1 00:12:34.493 --rc genhtml_legend=1 00:12:34.493 --rc geninfo_all_blocks=1 00:12:34.493 --rc geninfo_unexecuted_blocks=1 00:12:34.493 00:12:34.493 ' 00:12:34.493 06:37:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.493 --rc genhtml_branch_coverage=1 00:12:34.493 --rc genhtml_function_coverage=1 00:12:34.493 --rc genhtml_legend=1 00:12:34.493 --rc geninfo_all_blocks=1 00:12:34.493 --rc geninfo_unexecuted_blocks=1 00:12:34.493 00:12:34.493 ' 00:12:34.493 06:37:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.493 --rc genhtml_branch_coverage=1 00:12:34.493 --rc genhtml_function_coverage=1 00:12:34.493 --rc genhtml_legend=1 00:12:34.493 --rc geninfo_all_blocks=1 00:12:34.493 --rc geninfo_unexecuted_blocks=1 00:12:34.493 00:12:34.493 ' 00:12:34.493 
06:37:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.493 --rc genhtml_branch_coverage=1 00:12:34.493 --rc genhtml_function_coverage=1 00:12:34.493 --rc genhtml_legend=1 00:12:34.493 --rc geninfo_all_blocks=1 00:12:34.493 --rc geninfo_unexecuted_blocks=1 00:12:34.493 00:12:34.493 ' 00:12:34.493 06:37:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.751 06:37:29 -- nvmf/common.sh@7 -- # uname -s 00:12:34.751 06:37:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.751 06:37:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.751 06:37:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.751 06:37:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.751 06:37:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.751 06:37:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.751 06:37:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.751 06:37:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.751 06:37:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.751 06:37:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.751 06:37:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:12:34.751 06:37:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:12:34.751 06:37:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.751 06:37:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.751 06:37:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.751 06:37:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.751 06:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.751 06:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.751 06:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.751 06:37:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.751 06:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.751 06:37:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.751 06:37:29 -- paths/export.sh@5 -- # export PATH 00:12:34.751 06:37:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.751 06:37:29 -- nvmf/common.sh@46 -- # : 0 00:12:34.751 06:37:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.751 06:37:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.751 06:37:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.751 06:37:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.751 06:37:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.751 06:37:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.751 06:37:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.751 06:37:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.751 06:37:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.751 06:37:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.751 06:37:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:34.751 06:37:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:34.751 06:37:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.751 06:37:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.751 06:37:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.751 06:37:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.751 06:37:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.751 06:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.751 06:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.751 06:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.751 06:37:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:34.751 06:37:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:34.751 06:37:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:34.751 06:37:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:34.751 06:37:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:34.752 06:37:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:34.752 06:37:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.752 06:37:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.752 06:37:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.752 06:37:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:34.752 06:37:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.752 06:37:29 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.752 06:37:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.752 06:37:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.752 06:37:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.752 06:37:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.752 06:37:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.752 06:37:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.752 06:37:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:34.752 06:37:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:34.752 Cannot find device "nvmf_tgt_br" 00:12:34.752 06:37:30 -- nvmf/common.sh@154 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.752 Cannot find device "nvmf_tgt_br2" 00:12:34.752 06:37:30 -- nvmf/common.sh@155 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:34.752 06:37:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:34.752 Cannot find device "nvmf_tgt_br" 00:12:34.752 06:37:30 -- nvmf/common.sh@157 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:34.752 Cannot find device "nvmf_tgt_br2" 00:12:34.752 06:37:30 -- nvmf/common.sh@158 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:34.752 06:37:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:34.752 06:37:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.752 06:37:30 -- nvmf/common.sh@161 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.752 06:37:30 -- nvmf/common.sh@162 -- # true 00:12:34.752 06:37:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.752 06:37:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.752 06:37:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.752 06:37:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.752 06:37:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.752 06:37:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.010 06:37:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.010 06:37:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.010 06:37:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.010 06:37:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:35.010 06:37:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:35.010 06:37:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:35.010 06:37:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:35.010 06:37:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.010 06:37:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:35.010 06:37:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.010 06:37:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:35.010 06:37:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:35.010 06:37:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.010 06:37:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.010 06:37:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.010 06:37:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.010 06:37:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.010 06:37:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:35.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:35.010 00:12:35.010 --- 10.0.0.2 ping statistics --- 00:12:35.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.010 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:35.010 06:37:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:35.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:12:35.010 00:12:35.010 --- 10.0.0.3 ping statistics --- 00:12:35.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.010 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:35.010 06:37:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:35.010 00:12:35.010 --- 10.0.0.1 ping statistics --- 00:12:35.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.010 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:35.010 06:37:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.010 06:37:30 -- nvmf/common.sh@421 -- # return 0 00:12:35.010 06:37:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.010 06:37:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.010 06:37:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.010 06:37:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.010 06:37:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.010 06:37:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.010 06:37:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.010 06:37:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:35.010 06:37:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.010 06:37:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.010 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:12:35.010 06:37:30 -- nvmf/common.sh@469 -- # nvmfpid=78016 00:12:35.010 06:37:30 -- nvmf/common.sh@470 -- # waitforlisten 78016 00:12:35.010 06:37:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.010 06:37:30 -- common/autotest_common.sh@829 -- # '[' -z 78016 ']' 00:12:35.010 06:37:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.010 06:37:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.010 06:37:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.010 06:37:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.010 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:12:35.010 [2024-12-05 06:37:30.404536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:35.010 [2024-12-05 06:37:30.404642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.269 [2024-12-05 06:37:30.545835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.269 [2024-12-05 06:37:30.581188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.269 [2024-12-05 06:37:30.581355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.269 [2024-12-05 06:37:30.581386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.269 [2024-12-05 06:37:30.581402] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.269 [2024-12-05 06:37:30.581493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.269 [2024-12-05 06:37:30.581618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.269 [2024-12-05 06:37:30.582881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.269 [2024-12-05 06:37:30.582943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.205 06:37:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.205 06:37:31 -- common/autotest_common.sh@862 -- # return 0 00:12:36.205 06:37:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:36.205 06:37:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.205 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 06:37:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.205 06:37:31 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.205 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.205 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 [2024-12-05 06:37:31.389975] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.205 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.205 06:37:31 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:36.205 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.205 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:36.205 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.205 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 Malloc1 00:12:36.205 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.205 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:36.205 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.205 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.205 06:37:31 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.205 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 [2024-12-05 06:37:31.455028] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 Malloc2 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 Malloc3 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 
06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 Malloc4 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 Malloc5 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 Malloc6 00:12:36.206 06:37:31 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.206 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.206 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.206 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:36.206 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.206 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 Malloc7 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.466 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 Malloc8 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 
-- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.466 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 Malloc9 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.466 06:37:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 Malloc10 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.466 06:37:31 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 Malloc11 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:36.466 06:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.466 06:37:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 06:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.466 06:37:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:36.466 06:37:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:36.466 06:37:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.726 06:37:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:36.726 06:37:32 -- common/autotest_common.sh@1187 -- # local i=0 00:12:36.726 06:37:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.726 06:37:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:36.726 06:37:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:38.630 06:37:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:38.630 06:37:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:38.630 06:37:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:12:38.630 06:37:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:38.630 06:37:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.630 06:37:34 -- common/autotest_common.sh@1197 -- # return 0 00:12:38.630 06:37:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:38.630 06:37:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:38.889 06:37:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:38.889 06:37:34 -- common/autotest_common.sh@1187 -- # local i=0 00:12:38.889 06:37:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.889 06:37:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:38.889 06:37:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:40.792 06:37:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:40.792 06:37:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
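With all eleven subsystems exported, the host side attaches to each one in turn; the multiconnection.sh@28-30 markers above correspond to this loop (host NQN and host ID copied verbatim from the trace):

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e \
            --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"   # block until the new namespace is visible to lsblk
    done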
NAME,SERIAL 00:12:40.792 06:37:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:12:40.792 06:37:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:40.792 06:37:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.792 06:37:36 -- common/autotest_common.sh@1197 -- # return 0 00:12:40.792 06:37:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.792 06:37:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:41.061 06:37:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:41.061 06:37:36 -- common/autotest_common.sh@1187 -- # local i=0 00:12:41.061 06:37:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.061 06:37:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:41.061 06:37:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:42.969 06:37:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:42.969 06:37:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:42.969 06:37:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:42.969 06:37:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:42.969 06:37:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.969 06:37:38 -- common/autotest_common.sh@1197 -- # return 0 00:12:42.969 06:37:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:42.969 06:37:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:43.227 06:37:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:43.227 06:37:38 -- common/autotest_common.sh@1187 -- # local i=0 00:12:43.227 06:37:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.227 06:37:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:43.227 06:37:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:45.134 06:37:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:45.134 06:37:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:45.134 06:37:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:45.134 06:37:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:45.134 06:37:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.134 06:37:40 -- common/autotest_common.sh@1197 -- # return 0 00:12:45.134 06:37:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:45.134 06:37:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:45.393 06:37:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:45.393 06:37:40 -- common/autotest_common.sh@1187 -- # local i=0 00:12:45.393 06:37:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.393 06:37:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:45.394 06:37:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:47.303 06:37:42 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:47.303 06:37:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:47.303 06:37:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:47.303 06:37:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:47.303 06:37:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.303 06:37:42 -- common/autotest_common.sh@1197 -- # return 0 00:12:47.303 06:37:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.303 06:37:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:47.563 06:37:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:47.563 06:37:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:47.563 06:37:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.563 06:37:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:47.563 06:37:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:49.470 06:37:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:49.470 06:37:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:49.470 06:37:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:49.470 06:37:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:49.470 06:37:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.470 06:37:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:49.470 06:37:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:49.470 06:37:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:49.729 06:37:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:49.729 06:37:45 -- common/autotest_common.sh@1187 -- # local i=0 00:12:49.729 06:37:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.729 06:37:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:49.729 06:37:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:51.633 06:37:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:51.633 06:37:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:51.633 06:37:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:51.633 06:37:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:51.633 06:37:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.633 06:37:47 -- common/autotest_common.sh@1197 -- # return 0 00:12:51.633 06:37:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:51.633 06:37:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:51.892 06:37:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:51.892 06:37:47 -- common/autotest_common.sh@1187 -- # local i=0 00:12:51.892 06:37:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.892 06:37:47 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:51.892 06:37:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:53.798 06:37:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:53.798 06:37:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:53.798 06:37:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:53.798 06:37:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:53.798 06:37:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.798 06:37:49 -- common/autotest_common.sh@1197 -- # return 0 00:12:53.798 06:37:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:53.798 06:37:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:54.057 06:37:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:54.058 06:37:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:54.058 06:37:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.058 06:37:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:54.058 06:37:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:55.966 06:37:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:55.966 06:37:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:55.966 06:37:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:55.966 06:37:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:55.966 06:37:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.966 06:37:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:55.966 06:37:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:55.966 06:37:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:56.225 06:37:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:56.225 06:37:51 -- common/autotest_common.sh@1187 -- # local i=0 00:12:56.225 06:37:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.225 06:37:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:56.225 06:37:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:58.131 06:37:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:58.131 06:37:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:58.131 06:37:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:58.131 06:37:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:58.131 06:37:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.131 06:37:53 -- common/autotest_common.sh@1197 -- # return 0 00:12:58.131 06:37:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.131 06:37:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:58.390 06:37:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:58.390 06:37:53 -- common/autotest_common.sh@1187 -- # local i=0 
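The waitforserial handshake that follows every connect can be reconstructed from the common/autotest_common.sh@1187-@1197 markers echoed repeatedly above. A sketch follows; the retry delay inside the loop and the timeout path after 15 iterations are never exercised in this run (every device appears on the first check), so those details are assumptions:

    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2   # optional expected count; empty ('') in this run
        sleep 2
        while (( i++ <= 15 )); do
            # count block devices whose SERIAL column matches, e.g. SPDK1
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2   # assumed retry delay; only the pre-loop sleep shows in this log
        done
        return 1   # assumed timeout behavior, not hit in this log
    }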
00:12:58.390 06:37:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.390 06:37:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:58.390 06:37:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:00.310 06:37:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:00.310 06:37:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:00.310 06:37:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:13:00.310 06:37:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:00.310 06:37:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.310 06:37:55 -- common/autotest_common.sh@1197 -- # return 0 00:13:00.310 06:37:55 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:00.310 [global] 00:13:00.310 thread=1 00:13:00.310 invalidate=1 00:13:00.310 rw=read 00:13:00.310 time_based=1 00:13:00.310 runtime=10 00:13:00.310 ioengine=libaio 00:13:00.310 direct=1 00:13:00.310 bs=262144 00:13:00.310 iodepth=64 00:13:00.310 norandommap=1 00:13:00.310 numjobs=1 00:13:00.310 00:13:00.310 [job0] 00:13:00.310 filename=/dev/nvme0n1 00:13:00.310 [job1] 00:13:00.310 filename=/dev/nvme10n1 00:13:00.310 [job2] 00:13:00.310 filename=/dev/nvme1n1 00:13:00.310 [job3] 00:13:00.310 filename=/dev/nvme2n1 00:13:00.310 [job4] 00:13:00.310 filename=/dev/nvme3n1 00:13:00.310 [job5] 00:13:00.310 filename=/dev/nvme4n1 00:13:00.310 [job6] 00:13:00.310 filename=/dev/nvme5n1 00:13:00.310 [job7] 00:13:00.310 filename=/dev/nvme6n1 00:13:00.310 [job8] 00:13:00.310 filename=/dev/nvme7n1 00:13:00.569 [job9] 00:13:00.569 filename=/dev/nvme8n1 00:13:00.569 [job10] 00:13:00.569 filename=/dev/nvme9n1 00:13:00.569 Could not set queue depth (nvme0n1) 00:13:00.569 Could not set queue depth (nvme10n1) 00:13:00.569 Could not set queue depth (nvme1n1) 00:13:00.569 Could not set queue depth (nvme2n1) 00:13:00.569 Could not set queue depth (nvme3n1) 00:13:00.569 Could not set queue depth (nvme4n1) 00:13:00.569 Could not set queue depth (nvme5n1) 00:13:00.569 Could not set queue depth (nvme6n1) 00:13:00.569 Could not set queue depth (nvme7n1) 00:13:00.569 Could not set queue depth (nvme8n1) 00:13:00.569 Could not set queue depth (nvme9n1) 00:13:00.828 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
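The job file fio starts with here is generated by scripts/fio-wrapper from the flags on its command line; the mapping, as evidenced by the [global] section and [jobN] entries printed in this log, is:

    -p nvmf     ->  one [jobN] per connected namespace (filename=/dev/nvme*n1)
    -i 262144   ->  bs=262144     (256 KiB requests)
    -d 64       ->  iodepth=64
    -t read     ->  rw=read
    -r 10       ->  runtime=10    (with time_based=1)

The "Could not set queue depth" lines are warnings only: fio could not adjust the devices' queue settings, but all 11 jobs still start and run at iodepth=64 within libaio, as the "Starting 11 threads" line and the per-job results confirm.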
iodepth=64 00:13:00.828 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:00.828 fio-3.35 00:13:00.828 Starting 11 threads 00:13:13.056 00:13:13.056 job0: (groupid=0, jobs=1): err= 0: pid=78475: Thu Dec 5 06:38:06 2024 00:13:13.056 read: IOPS=532, BW=133MiB/s (140MB/s)(1345MiB/10097msec) 00:13:13.056 slat (usec): min=19, max=49616, avg=1843.13, stdev=4588.72 00:13:13.056 clat (msec): min=12, max=213, avg=118.15, stdev=15.30 00:13:13.056 lat (msec): min=16, max=213, avg=119.99, stdev=15.79 00:13:13.056 clat percentiles (msec): 00:13:13.056 | 1.00th=[ 59], 5.00th=[ 87], 10.00th=[ 111], 20.00th=[ 115], 00:13:13.056 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 122], 00:13:13.056 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 133], 00:13:13.056 | 99.00th=[ 148], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:13:13.056 | 99.99th=[ 213] 00:13:13.056 bw ( KiB/s): min=127488, max=174592, per=6.55%, avg=136053.20, stdev=11116.51, samples=20 00:13:13.056 iops : min= 498, max= 682, avg=531.35, stdev=43.43, samples=20 00:13:13.056 lat (msec) : 20=0.17%, 50=0.74%, 100=6.45%, 250=92.64% 00:13:13.056 cpu : usr=0.28%, sys=2.06%, ctx=1327, majf=0, minf=4097 00:13:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.056 issued rwts: total=5379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.056 job1: (groupid=0, jobs=1): err= 0: pid=78476: Thu Dec 5 06:38:06 2024 00:13:13.056 read: IOPS=986, BW=247MiB/s (259MB/s)(2469MiB/10013msec) 00:13:13.056 slat (usec): min=20, max=31569, avg=1008.47, stdev=2228.36 00:13:13.056 clat (msec): min=11, max=127, avg=63.82, stdev= 9.02 00:13:13.056 lat (msec): min=15, max=127, avg=64.83, stdev= 9.06 00:13:13.056 clat percentiles (msec): 00:13:13.056 | 1.00th=[ 49], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 59], 00:13:13.056 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:13:13.056 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 75], 00:13:13.056 | 99.00th=[ 105], 99.50th=[ 115], 99.90th=[ 126], 99.95th=[ 127], 00:13:13.056 | 99.99th=[ 128] 00:13:13.056 bw ( KiB/s): min=160064, max=264704, per=12.09%, avg=251151.95, stdev=22573.88, samples=20 00:13:13.056 iops : min= 625, max= 1034, avg=981.00, stdev=88.22, samples=20 00:13:13.056 lat (msec) : 20=0.09%, 50=1.52%, 100=97.05%, 250=1.34% 00:13:13.056 cpu : usr=0.37%, sys=3.58%, ctx=2159, majf=0, minf=4097 00:13:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.056 issued rwts: total=9874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.056 job2: (groupid=0, jobs=1): err= 0: pid=78477: Thu Dec 5 06:38:06 2024 00:13:13.056 read: IOPS=991, BW=248MiB/s (260MB/s)(2483MiB/10017msec) 00:13:13.056 slat (usec): min=20, max=18185, avg=998.87, stdev=2198.70 00:13:13.056 clat (msec): min=9, max=118, avg=63.45, stdev= 8.22 00:13:13.056 lat (msec): min=15, max=118, avg=64.45, stdev= 8.26 00:13:13.056 clat percentiles (msec): 00:13:13.056 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 
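job0's read numbers above are internally consistent, which is a quick way to sanity-check fio output: 532 IOPS x 256 KiB per request = 133 MiB/s, matching the reported bandwidth, and by Little's law the average in-flight count is IOPS x mean completion latency = 532 x 0.118 s, roughly 63, so the configured iodepth=64 is being kept essentially full across the 10-second run.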
56], 20.00th=[ 59], 00:13:13.056 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:13:13.056 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 75], 00:13:13.056 | 99.00th=[ 94], 99.50th=[ 104], 99.90th=[ 112], 99.95th=[ 114], 00:13:13.056 | 99.99th=[ 120] 00:13:13.056 bw ( KiB/s): min=180736, max=265728, per=12.15%, avg=252569.35, stdev=18629.95, samples=20 00:13:13.056 iops : min= 706, max= 1038, avg=986.55, stdev=72.75, samples=20 00:13:13.056 lat (msec) : 10=0.01%, 20=0.13%, 50=2.35%, 100=96.90%, 250=0.61% 00:13:13.056 cpu : usr=0.41%, sys=3.58%, ctx=2155, majf=0, minf=4097 00:13:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.056 issued rwts: total=9931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.056 job3: (groupid=0, jobs=1): err= 0: pid=78478: Thu Dec 5 06:38:06 2024 00:13:13.056 read: IOPS=529, BW=132MiB/s (139MB/s)(1337MiB/10099msec) 00:13:13.056 slat (usec): min=21, max=47952, avg=1853.34, stdev=4492.01 00:13:13.056 clat (msec): min=56, max=211, avg=118.84, stdev=12.45 00:13:13.056 lat (msec): min=60, max=211, avg=120.69, stdev=12.93 00:13:13.056 clat percentiles (msec): 00:13:13.056 | 1.00th=[ 68], 5.00th=[ 95], 10.00th=[ 111], 20.00th=[ 116], 00:13:13.056 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 122], 00:13:13.056 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 131], 00:13:13.056 | 99.00th=[ 148], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 209], 00:13:13.056 | 99.99th=[ 211] 00:13:13.056 bw ( KiB/s): min=128512, max=158012, per=6.51%, avg=135285.30, stdev=7198.21, samples=20 00:13:13.056 iops : min= 502, max= 617, avg=528.35, stdev=28.12, samples=20 00:13:13.056 lat (msec) : 100=6.09%, 250=93.91% 00:13:13.056 cpu : usr=0.42%, sys=2.13%, ctx=1318, majf=0, minf=4097 00:13:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.056 issued rwts: total=5349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job4: (groupid=0, jobs=1): err= 0: pid=78479: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=544, BW=136MiB/s (143MB/s)(1375MiB/10094msec) 00:13:13.057 slat (usec): min=20, max=33889, avg=1814.42, stdev=4098.78 00:13:13.057 clat (msec): min=18, max=210, avg=115.46, stdev=19.41 00:13:13.057 lat (msec): min=19, max=210, avg=117.27, stdev=19.83 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 39], 5.00th=[ 66], 10.00th=[ 87], 20.00th=[ 115], 00:13:13.057 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 122], 00:13:13.057 | 70.00th=[ 124], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 131], 00:13:13.057 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 205], 99.95th=[ 211], 00:13:13.057 | 99.99th=[ 211] 00:13:13.057 bw ( KiB/s): min=129024, max=228809, per=6.70%, avg=139170.10, stdev=21872.63, samples=20 00:13:13.057 iops : min= 504, max= 893, avg=543.45, stdev=85.31, samples=20 00:13:13.057 lat (msec) : 20=0.02%, 50=1.40%, 100=9.69%, 250=88.89% 00:13:13.057 cpu : usr=0.29%, sys=2.13%, ctx=1347, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.3%, 32=0.6%, >=64=98.9% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=5501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job5: (groupid=0, jobs=1): err= 0: pid=78482: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=542, BW=136MiB/s (142MB/s)(1370MiB/10101msec) 00:13:13.057 slat (usec): min=20, max=45121, avg=1821.49, stdev=4465.46 00:13:13.057 clat (msec): min=15, max=218, avg=116.00, stdev=19.41 00:13:13.057 lat (msec): min=16, max=224, avg=117.82, stdev=19.92 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 51], 5.00th=[ 65], 10.00th=[ 85], 20.00th=[ 116], 00:13:13.057 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 123], 00:13:13.057 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 129], 95.00th=[ 132], 00:13:13.057 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 220], 00:13:13.057 | 99.99th=[ 220] 00:13:13.057 bw ( KiB/s): min=127488, max=230400, per=6.67%, avg=138597.75, stdev=22309.34, samples=20 00:13:13.057 iops : min= 498, max= 900, avg=541.30, stdev=87.18, samples=20 00:13:13.057 lat (msec) : 20=0.11%, 50=0.84%, 100=10.02%, 250=89.03% 00:13:13.057 cpu : usr=0.22%, sys=2.01%, ctx=1302, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=5478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job6: (groupid=0, jobs=1): err= 0: pid=78487: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=865, BW=216MiB/s (227MB/s)(2180MiB/10075msec) 00:13:13.057 slat (usec): min=20, max=28329, avg=1142.29, stdev=2459.72 00:13:13.057 clat (msec): min=23, max=165, avg=72.69, stdev=14.54 00:13:13.057 lat (msec): min=23, max=165, avg=73.83, stdev=14.74 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 61], 00:13:13.057 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 73], 00:13:13.057 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 95], 00:13:13.057 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 146], 99.95th=[ 159], 00:13:13.057 | 99.99th=[ 165] 00:13:13.057 bw ( KiB/s): min=172544, max=262144, per=10.66%, avg=221557.55, stdev=37482.75, samples=20 00:13:13.057 iops : min= 674, max= 1024, avg=865.40, stdev=146.49, samples=20 00:13:13.057 lat (msec) : 50=1.27%, 100=97.53%, 250=1.19% 00:13:13.057 cpu : usr=0.46%, sys=3.11%, ctx=2030, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=8719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job7: (groupid=0, jobs=1): err= 0: pid=78488: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=699, BW=175MiB/s (183MB/s)(1762MiB/10071msec) 00:13:13.057 slat (usec): min=14, max=31518, avg=1402.58, stdev=2953.75 00:13:13.057 clat (msec): min=35, max=160, avg=89.93, stdev= 7.50 00:13:13.057 lat 
(msec): min=35, max=160, avg=91.33, stdev= 7.64 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 53], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:13:13.057 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 00:13:13.057 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 99], 00:13:13.057 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 148], 99.95th=[ 159], 00:13:13.057 | 99.99th=[ 161] 00:13:13.057 bw ( KiB/s): min=157696, max=194048, per=8.60%, avg=178798.00, stdev=6585.67, samples=20 00:13:13.057 iops : min= 616, max= 758, avg=698.35, stdev=25.75, samples=20 00:13:13.057 lat (msec) : 50=0.57%, 100=95.84%, 250=3.59% 00:13:13.057 cpu : usr=0.33%, sys=2.96%, ctx=1726, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=7048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job8: (groupid=0, jobs=1): err= 0: pid=78489: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=803, BW=201MiB/s (211MB/s)(2016MiB/10039msec) 00:13:13.057 slat (usec): min=20, max=30903, avg=1236.20, stdev=2703.71 00:13:13.057 clat (msec): min=18, max=115, avg=78.33, stdev=14.36 00:13:13.057 lat (msec): min=18, max=115, avg=79.57, stdev=14.61 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 00:13:13.057 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 88], 00:13:13.057 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 95], 95.00th=[ 97], 00:13:13.057 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 110], 00:13:13.057 | 99.99th=[ 116] 00:13:13.057 bw ( KiB/s): min=172544, max=256000, per=9.86%, avg=204792.95, stdev=36027.73, samples=20 00:13:13.057 iops : min= 674, max= 1000, avg=799.95, stdev=140.74, samples=20 00:13:13.057 lat (msec) : 20=0.04%, 50=0.51%, 100=97.61%, 250=1.85% 00:13:13.057 cpu : usr=0.45%, sys=2.86%, ctx=1899, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=8064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job9: (groupid=0, jobs=1): err= 0: pid=78490: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=857, BW=214MiB/s (225MB/s)(2160MiB/10075msec) 00:13:13.057 slat (usec): min=21, max=43038, avg=1152.62, stdev=2485.93 00:13:13.057 clat (msec): min=22, max=156, avg=73.35, stdev=15.09 00:13:13.057 lat (msec): min=23, max=169, avg=74.50, stdev=15.29 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 52], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 00:13:13.057 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 73], 00:13:13.057 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 96], 00:13:13.057 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 150], 99.95th=[ 157], 00:13:13.057 | 99.99th=[ 157] 00:13:13.057 bw ( KiB/s): min=156160, max=264704, per=10.57%, avg=219561.45, stdev=38916.02, samples=20 00:13:13.057 iops : min= 610, max= 1034, avg=857.60, stdev=152.09, samples=20 00:13:13.057 lat (msec) : 50=0.37%, 100=96.99%, 250=2.64% 00:13:13.057 
cpu : usr=0.55%, sys=3.14%, ctx=1989, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.057 issued rwts: total=8641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.057 job10: (groupid=0, jobs=1): err= 0: pid=78491: Thu Dec 5 06:38:06 2024 00:13:13.057 read: IOPS=797, BW=199MiB/s (209MB/s)(2002MiB/10039msec) 00:13:13.057 slat (usec): min=21, max=46727, avg=1244.64, stdev=2757.80 00:13:13.057 clat (msec): min=17, max=134, avg=78.90, stdev=15.25 00:13:13.057 lat (msec): min=18, max=140, avg=80.15, stdev=15.48 00:13:13.057 clat percentiles (msec): 00:13:13.057 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 64], 00:13:13.057 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 85], 60.00th=[ 88], 00:13:13.057 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 95], 95.00th=[ 99], 00:13:13.057 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 134], 99.95th=[ 134], 00:13:13.057 | 99.99th=[ 136] 00:13:13.057 bw ( KiB/s): min=153600, max=256512, per=9.78%, avg=203281.05, stdev=37717.82, samples=20 00:13:13.057 iops : min= 600, max= 1002, avg=794.00, stdev=147.38, samples=20 00:13:13.057 lat (msec) : 20=0.02%, 50=0.49%, 100=95.74%, 250=3.75% 00:13:13.057 cpu : usr=0.31%, sys=3.32%, ctx=1846, majf=0, minf=4097 00:13:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:13.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:13.058 issued rwts: total=8006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:13.058 00:13:13.058 Run status group 0 (all jobs): 00:13:13.058 READ: bw=2029MiB/s (2128MB/s), 132MiB/s-248MiB/s (139MB/s-260MB/s), io=20.0GiB (21.5GB), run=10013-10101msec 00:13:13.058 00:13:13.058 Disk stats (read/write): 00:13:13.058 nvme0n1: ios=10640/0, merge=0/0, ticks=1224636/0, in_queue=1224636, util=97.85% 00:13:13.058 nvme10n1: ios=19665/0, merge=0/0, ticks=1238173/0, in_queue=1238173, util=97.87% 00:13:13.058 nvme1n1: ios=19778/0, merge=0/0, ticks=1237054/0, in_queue=1237054, util=98.16% 00:13:13.058 nvme2n1: ios=10576/0, merge=0/0, ticks=1226369/0, in_queue=1226369, util=98.30% 00:13:13.058 nvme3n1: ios=10878/0, merge=0/0, ticks=1223522/0, in_queue=1223522, util=98.27% 00:13:13.058 nvme4n1: ios=10839/0, merge=0/0, ticks=1228022/0, in_queue=1228022, util=98.49% 00:13:13.058 nvme5n1: ios=17329/0, merge=0/0, ticks=1231546/0, in_queue=1231546, util=98.64% 00:13:13.058 nvme6n1: ios=13975/0, merge=0/0, ticks=1228925/0, in_queue=1228925, util=98.63% 00:13:13.058 nvme7n1: ios=16010/0, merge=0/0, ticks=1231079/0, in_queue=1231079, util=98.91% 00:13:13.058 nvme8n1: ios=17173/0, merge=0/0, ticks=1231271/0, in_queue=1231271, util=99.11% 00:13:13.058 nvme9n1: ios=15894/0, merge=0/0, ticks=1232096/0, in_queue=1232096, util=99.16% 00:13:13.058 06:38:06 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:13.058 [global] 00:13:13.058 thread=1 00:13:13.058 invalidate=1 00:13:13.058 rw=randwrite 00:13:13.058 time_based=1 00:13:13.058 runtime=10 00:13:13.058 ioengine=libaio 00:13:13.058 direct=1 00:13:13.058 bs=262144 00:13:13.058 
iodepth=64 00:13:13.058 norandommap=1 00:13:13.058 numjobs=1 00:13:13.058 00:13:13.058 [job0] 00:13:13.058 filename=/dev/nvme0n1 00:13:13.058 [job1] 00:13:13.058 filename=/dev/nvme10n1 00:13:13.058 [job2] 00:13:13.058 filename=/dev/nvme1n1 00:13:13.058 [job3] 00:13:13.058 filename=/dev/nvme2n1 00:13:13.058 [job4] 00:13:13.058 filename=/dev/nvme3n1 00:13:13.058 [job5] 00:13:13.058 filename=/dev/nvme4n1 00:13:13.058 [job6] 00:13:13.058 filename=/dev/nvme5n1 00:13:13.058 [job7] 00:13:13.058 filename=/dev/nvme6n1 00:13:13.058 [job8] 00:13:13.058 filename=/dev/nvme7n1 00:13:13.058 [job9] 00:13:13.058 filename=/dev/nvme8n1 00:13:13.058 [job10] 00:13:13.058 filename=/dev/nvme9n1 00:13:13.058 Could not set queue depth (nvme0n1) 00:13:13.058 Could not set queue depth (nvme10n1) 00:13:13.058 Could not set queue depth (nvme1n1) 00:13:13.058 Could not set queue depth (nvme2n1) 00:13:13.058 Could not set queue depth (nvme3n1) 00:13:13.058 Could not set queue depth (nvme4n1) 00:13:13.058 Could not set queue depth (nvme5n1) 00:13:13.058 Could not set queue depth (nvme6n1) 00:13:13.058 Could not set queue depth (nvme7n1) 00:13:13.058 Could not set queue depth (nvme8n1) 00:13:13.058 Could not set queue depth (nvme9n1) 00:13:13.058 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:13.058 fio-3.35 00:13:13.058 Starting 11 threads 00:13:23.042 00:13:23.042 job0: (groupid=0, jobs=1): err= 0: pid=78685: Thu Dec 5 06:38:17 2024 00:13:23.042 write: IOPS=310, BW=77.5MiB/s (81.3MB/s)(790MiB/10195msec); 0 zone resets 00:13:23.042 slat (usec): min=21, max=31984, avg=3160.26, stdev=5558.82 00:13:23.042 clat (msec): min=13, max=410, avg=203.17, stdev=33.65 00:13:23.042 lat (msec): min=13, max=410, avg=206.33, stdev=33.70 00:13:23.042 clat percentiles (msec): 00:13:23.042 | 1.00th=[ 57], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 188], 00:13:23.042 | 30.00th=[ 194], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 220], 00:13:23.042 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 228], 00:13:23.042 | 99.00th=[ 296], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 409], 00:13:23.042 | 99.99th=[ 409] 00:13:23.043 bw ( KiB/s): min=71680, max=102400, per=5.47%, avg=79278.10, stdev=9254.42, samples=20 
00:13:23.043 iops : min= 280, max= 400, avg=309.60, stdev=36.18, samples=20 00:13:23.043 lat (msec) : 20=0.16%, 50=0.60%, 100=0.89%, 250=96.90%, 500=1.46% 00:13:23.043 cpu : usr=0.57%, sys=0.92%, ctx=3484, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,3161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job1: (groupid=0, jobs=1): err= 0: pid=78686: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=424, BW=106MiB/s (111MB/s)(1081MiB/10189msec); 0 zone resets 00:13:23.043 slat (usec): min=17, max=25092, avg=2289.43, stdev=4418.68 00:13:23.043 clat (msec): min=12, max=409, avg=148.45, stdev=65.53 00:13:23.043 lat (msec): min=12, max=409, avg=150.74, stdev=66.38 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 66], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 92], 00:13:23.043 | 30.00th=[ 93], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 207], 00:13:23.043 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 224], 00:13:23.043 | 99.00th=[ 255], 99.50th=[ 338], 99.90th=[ 397], 99.95th=[ 397], 00:13:23.043 | 99.99th=[ 409] 00:13:23.043 bw ( KiB/s): min=70144, max=178176, per=7.52%, avg=109074.20, stdev=48847.47, samples=20 00:13:23.043 iops : min= 274, max= 696, avg=426.05, stdev=190.83, samples=20 00:13:23.043 lat (msec) : 20=0.19%, 50=0.56%, 100=52.47%, 250=45.72%, 500=1.06% 00:13:23.043 cpu : usr=0.77%, sys=1.33%, ctx=5141, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,4324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job2: (groupid=0, jobs=1): err= 0: pid=78698: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=307, BW=76.8MiB/s (80.5MB/s)(782MiB/10186msec); 0 zone resets 00:13:23.043 slat (usec): min=16, max=31456, avg=3189.59, stdev=5638.22 00:13:23.043 clat (msec): min=19, max=409, avg=205.07, stdev=35.07 00:13:23.043 lat (msec): min=19, max=409, avg=208.26, stdev=35.16 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 62], 5.00th=[ 150], 10.00th=[ 161], 20.00th=[ 188], 00:13:23.043 | 30.00th=[ 199], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 220], 00:13:23.043 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 236], 00:13:23.043 | 99.00th=[ 296], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 409], 00:13:23.043 | 99.99th=[ 409] 00:13:23.043 bw ( KiB/s): min=69493, max=106496, per=5.41%, avg=78482.65, stdev=10243.81, samples=20 00:13:23.043 iops : min= 271, max= 416, avg=306.55, stdev=40.04, samples=20 00:13:23.043 lat (msec) : 20=0.13%, 50=0.64%, 100=0.77%, 250=97.00%, 500=1.47% 00:13:23.043 cpu : usr=0.60%, sys=0.99%, ctx=3768, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,3129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:13:23.043 job3: (groupid=0, jobs=1): err= 0: pid=78699: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=679, BW=170MiB/s (178MB/s)(1713MiB/10083msec); 0 zone resets 00:13:23.043 slat (usec): min=18, max=35743, avg=1440.17, stdev=2513.49 00:13:23.043 clat (msec): min=20, max=177, avg=92.72, stdev=11.76 00:13:23.043 lat (msec): min=22, max=178, avg=94.16, stdev=11.69 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 69], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:23.043 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 91], 60.00th=[ 92], 00:13:23.043 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 124], 00:13:23.043 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 165], 99.95th=[ 171], 00:13:23.043 | 99.99th=[ 178] 00:13:23.043 bw ( KiB/s): min=123126, max=185856, per=11.98%, avg=173767.10, stdev=16531.67, samples=20 00:13:23.043 iops : min= 480, max= 726, avg=678.70, stdev=64.72, samples=20 00:13:23.043 lat (msec) : 50=0.57%, 100=90.99%, 250=8.44% 00:13:23.043 cpu : usr=0.94%, sys=1.38%, ctx=8548, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,6851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job4: (groupid=0, jobs=1): err= 0: pid=78700: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=679, BW=170MiB/s (178MB/s)(1713MiB/10079msec); 0 zone resets 00:13:23.043 slat (usec): min=18, max=71049, avg=1453.35, stdev=2596.85 00:13:23.043 clat (msec): min=73, max=182, avg=92.65, stdev=11.55 00:13:23.043 lat (msec): min=73, max=182, avg=94.10, stdev=11.45 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:13:23.043 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:13:23.043 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 94], 95.00th=[ 127], 00:13:23.043 | 99.00th=[ 134], 99.50th=[ 146], 99.90th=[ 174], 99.95th=[ 176], 00:13:23.043 | 99.99th=[ 182] 00:13:23.043 bw ( KiB/s): min=111104, max=182930, per=11.98%, avg=173769.50, stdev=19281.06, samples=20 00:13:23.043 iops : min= 434, max= 714, avg=678.70, stdev=75.28, samples=20 00:13:23.043 lat (msec) : 100=91.16%, 250=8.84% 00:13:23.043 cpu : usr=1.28%, sys=1.91%, ctx=7809, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,6853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job5: (groupid=0, jobs=1): err= 0: pid=78701: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=343, BW=86.0MiB/s (90.2MB/s)(877MiB/10193msec); 0 zone resets 00:13:23.043 slat (usec): min=17, max=25890, avg=2817.77, stdev=5192.84 00:13:23.043 clat (msec): min=12, max=407, avg=183.17, stdev=56.52 00:13:23.043 lat (msec): min=12, max=407, avg=185.99, stdev=57.19 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 51], 5.00th=[ 88], 10.00th=[ 92], 20.00th=[ 95], 00:13:23.043 | 30.00th=[ 186], 40.00th=[ 197], 50.00th=[ 207], 60.00th=[ 218], 00:13:23.043 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 226], 
95.00th=[ 228], 00:13:23.043 | 99.00th=[ 279], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 409], 00:13:23.043 | 99.99th=[ 409] 00:13:23.043 bw ( KiB/s): min=71680, max=176640, per=6.08%, avg=88133.40, stdev=31687.21, samples=20 00:13:23.043 iops : min= 280, max= 690, avg=344.25, stdev=123.79, samples=20 00:13:23.043 lat (msec) : 20=0.17%, 50=0.83%, 100=20.91%, 250=76.78%, 500=1.31% 00:13:23.043 cpu : usr=0.63%, sys=1.00%, ctx=3939, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,3506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job6: (groupid=0, jobs=1): err= 0: pid=78702: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=306, BW=76.7MiB/s (80.5MB/s)(782MiB/10189msec); 0 zone resets 00:13:23.043 slat (usec): min=18, max=73076, avg=3190.31, stdev=5701.01 00:13:23.043 clat (msec): min=12, max=409, avg=205.09, stdev=29.08 00:13:23.043 lat (msec): min=12, max=409, avg=208.28, stdev=28.95 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 146], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 188], 00:13:23.043 | 30.00th=[ 194], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 220], 00:13:23.043 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 228], 00:13:23.043 | 99.00th=[ 296], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 409], 00:13:23.043 | 99.99th=[ 409] 00:13:23.043 bw ( KiB/s): min=70144, max=100864, per=5.41%, avg=78456.60, stdev=7522.24, samples=20 00:13:23.043 iops : min= 274, max= 394, avg=306.45, stdev=29.40, samples=20 00:13:23.043 lat (msec) : 20=0.06%, 50=0.13%, 100=0.26%, 250=98.08%, 500=1.47% 00:13:23.043 cpu : usr=0.46%, sys=1.07%, ctx=2896, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,3128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.043 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.043 job7: (groupid=0, jobs=1): err= 0: pid=78703: Thu Dec 5 06:38:17 2024 00:13:23.043 write: IOPS=309, BW=77.5MiB/s (81.2MB/s)(789MiB/10183msec); 0 zone resets 00:13:23.043 slat (usec): min=19, max=46079, avg=3162.59, stdev=5571.86 00:13:23.043 clat (msec): min=48, max=406, avg=203.25, stdev=30.46 00:13:23.043 lat (msec): min=48, max=406, avg=206.41, stdev=30.42 00:13:23.043 clat percentiles (msec): 00:13:23.043 | 1.00th=[ 120], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 182], 00:13:23.043 | 30.00th=[ 192], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 220], 00:13:23.043 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 228], 00:13:23.043 | 99.00th=[ 296], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:13:23.043 | 99.99th=[ 405] 00:13:23.043 bw ( KiB/s): min=70656, max=102400, per=5.46%, avg=79158.80, stdev=8806.68, samples=20 00:13:23.043 iops : min= 276, max= 400, avg=309.15, stdev=34.45, samples=20 00:13:23.043 lat (msec) : 50=0.10%, 100=0.76%, 250=97.72%, 500=1.43% 00:13:23.043 cpu : usr=0.52%, sys=0.97%, ctx=4184, majf=0, minf=1 00:13:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.043 issued rwts: total=0,3156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.044 job8: (groupid=0, jobs=1): err= 0: pid=78708: Thu Dec 5 06:38:17 2024 00:13:23.044 write: IOPS=692, BW=173MiB/s (182MB/s)(1746MiB/10083msec); 0 zone resets 00:13:23.044 slat (usec): min=19, max=29503, avg=1426.34, stdev=2431.84 00:13:23.044 clat (msec): min=31, max=178, avg=90.96, stdev= 6.79 00:13:23.044 lat (msec): min=31, max=178, avg=92.39, stdev= 6.47 00:13:23.044 clat percentiles (msec): 00:13:23.044 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:23.044 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 91], 60.00th=[ 92], 00:13:23.044 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 96], 00:13:23.044 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 167], 99.95th=[ 171], 00:13:23.044 | 99.99th=[ 178] 00:13:23.044 bw ( KiB/s): min=163840, max=182272, per=12.22%, avg=177133.95, stdev=5088.38, samples=20 00:13:23.044 iops : min= 640, max= 712, avg=691.90, stdev=19.86, samples=20 00:13:23.044 lat (msec) : 50=0.23%, 100=97.52%, 250=2.25% 00:13:23.044 cpu : usr=1.28%, sys=1.87%, ctx=8419, majf=0, minf=1 00:13:23.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:23.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.044 issued rwts: total=0,6983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.044 job9: (groupid=0, jobs=1): err= 0: pid=78709: Thu Dec 5 06:38:17 2024 00:13:23.044 write: IOPS=682, BW=171MiB/s (179MB/s)(1721MiB/10086msec); 0 zone resets 00:13:23.044 slat (usec): min=17, max=36569, avg=1449.52, stdev=2512.05 00:13:23.044 clat (msec): min=13, max=174, avg=92.31, stdev=12.68 00:13:23.044 lat (msec): min=13, max=174, avg=93.76, stdev=12.62 00:13:23.044 clat percentiles (msec): 00:13:23.044 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 85], 20.00th=[ 87], 00:13:23.044 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:13:23.044 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 94], 95.00th=[ 128], 00:13:23.044 | 99.00th=[ 134], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 169], 00:13:23.044 | 99.99th=[ 176] 00:13:23.044 bw ( KiB/s): min=126976, max=184320, per=12.04%, avg=174561.00, stdev=16791.16, samples=20 00:13:23.044 iops : min= 496, max= 720, avg=681.80, stdev=65.73, samples=20 00:13:23.044 lat (msec) : 20=0.12%, 50=0.41%, 100=90.67%, 250=8.81% 00:13:23.044 cpu : usr=0.91%, sys=1.32%, ctx=8920, majf=0, minf=1 00:13:23.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:23.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.044 issued rwts: total=0,6882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.044 job10: (groupid=0, jobs=1): err= 0: pid=78710: Thu Dec 5 06:38:17 2024 00:13:23.044 write: IOPS=972, BW=243MiB/s (255MB/s)(2444MiB/10054msec); 0 zone resets 00:13:23.044 slat (usec): min=17, max=13037, avg=1019.24, stdev=1779.66 00:13:23.044 clat (msec): min=15, max=108, avg=64.78, stdev=15.25 00:13:23.044 lat (msec): min=15, max=108, avg=65.80, stdev=15.39 00:13:23.044 clat 
percentiles (msec): 00:13:23.044 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55], 00:13:23.044 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:13:23.044 | 70.00th=[ 59], 80.00th=[ 88], 90.00th=[ 93], 95.00th=[ 94], 00:13:23.044 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 105], 00:13:23.044 | 99.99th=[ 109] 00:13:23.044 bw ( KiB/s): min=171520, max=291328, per=17.14%, avg=248575.25, stdev=53187.51, samples=20 00:13:23.044 iops : min= 670, max= 1138, avg=970.90, stdev=207.76, samples=20 00:13:23.044 lat (msec) : 20=0.03%, 50=0.25%, 100=99.62%, 250=0.10% 00:13:23.044 cpu : usr=1.26%, sys=1.95%, ctx=12858, majf=0, minf=1 00:13:23.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:23.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:23.044 issued rwts: total=0,9777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.044 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.044 00:13:23.044 Run status group 0 (all jobs): 00:13:23.044 WRITE: bw=1416MiB/s (1485MB/s), 76.7MiB/s-243MiB/s (80.5MB/s-255MB/s), io=14.1GiB (15.1GB), run=10054-10195msec 00:13:23.044 00:13:23.044 Disk stats (read/write): 00:13:23.044 nvme0n1: ios=49/6190, merge=0/0, ticks=65/1208173, in_queue=1208238, util=97.86% 00:13:23.044 nvme10n1: ios=49/8518, merge=0/0, ticks=44/1208541, in_queue=1208585, util=97.96% 00:13:23.044 nvme1n1: ios=41/6125, merge=0/0, ticks=41/1207115, in_queue=1207156, util=98.02% 00:13:23.044 nvme2n1: ios=24/13552, merge=0/0, ticks=35/1215521, in_queue=1215556, util=97.96% 00:13:23.044 nvme3n1: ios=0/13542, merge=0/0, ticks=0/1213467, in_queue=1213467, util=97.89% 00:13:23.044 nvme4n1: ios=0/6877, merge=0/0, ticks=0/1208983, in_queue=1208983, util=98.28% 00:13:23.044 nvme5n1: ios=0/6123, merge=0/0, ticks=0/1207428, in_queue=1207428, util=98.32% 00:13:23.044 nvme6n1: ios=0/6174, merge=0/0, ticks=0/1207385, in_queue=1207385, util=98.38% 00:13:23.044 nvme7n1: ios=0/13817, merge=0/0, ticks=0/1214554, in_queue=1214554, util=98.68% 00:13:23.044 nvme8n1: ios=0/13618, merge=0/0, ticks=0/1215994, in_queue=1215994, util=98.87% 00:13:23.044 nvme9n1: ios=0/19404, merge=0/0, ticks=0/1218803, in_queue=1218803, util=99.00% 00:13:23.044 06:38:17 -- target/multiconnection.sh@36 -- # sync 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.044 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.044 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:23.044 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:13:23.044 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.044 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.044 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.044 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.044 
06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.044 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:23.044 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:23.044 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:23.044 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:13:23.044 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.044 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:23.044 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.044 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.044 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.044 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:23.044 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:23.044 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:23.044 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:13:23.044 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.044 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:23.044 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.044 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.044 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.044 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:23.044 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:23.044 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:23.044 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:13:23.044 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.044 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:23.044 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.044 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.044 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.044 06:38:17 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.044 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:23.044 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:23.044 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:23.044 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.044 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:13:23.044 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.044 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:23.044 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.044 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.044 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.044 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:23.045 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:23.045 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:13:23.045 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:13:23.045 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:23.045 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:23.045 06:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:23.045 06:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:13:23.045 06:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:13:23.045 06:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:23.045 06:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:17 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:23.045 06:38:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:23.045 06:38:18 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:13:23.045 06:38:18 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:23.045 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:23.045 06:38:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:23.045 06:38:18 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:13:23.045 06:38:18 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:23.045 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:23.045 06:38:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:23.045 06:38:18 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:13:23.045 06:38:18 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:23.045 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:23.045 06:38:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
00:13:23.045 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:23.045 06:38:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:23.045 06:38:18 -- common/autotest_common.sh@1208 -- # local i=0 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:23.045 06:38:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:13:23.045 06:38:18 -- common/autotest_common.sh@1220 -- # return 0 00:13:23.045 06:38:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:23.045 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.045 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.045 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.045 06:38:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:23.045 06:38:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:23.045 06:38:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:23.045 06:38:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.045 06:38:18 -- nvmf/common.sh@116 -- # sync 00:13:23.045 06:38:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.045 06:38:18 -- nvmf/common.sh@119 -- # set +e 00:13:23.045 06:38:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.045 06:38:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.045 rmmod nvme_tcp 00:13:23.045 rmmod nvme_fabrics 00:13:23.045 rmmod nvme_keyring 00:13:23.045 06:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.045 06:38:18 -- nvmf/common.sh@123 -- # set -e 00:13:23.045 06:38:18 -- nvmf/common.sh@124 -- # return 0 00:13:23.045 06:38:18 -- nvmf/common.sh@477 -- # '[' -n 78016 ']' 00:13:23.045 06:38:18 -- nvmf/common.sh@478 -- # killprocess 78016 00:13:23.045 06:38:18 -- common/autotest_common.sh@936 -- # '[' -z 78016 ']' 00:13:23.045 06:38:18 -- common/autotest_common.sh@940 -- # kill -0 78016 00:13:23.045 06:38:18 -- common/autotest_common.sh@941 -- # uname 00:13:23.045 06:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.045 06:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78016 00:13:23.045 06:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.045 06:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.045 killing process with pid 78016 00:13:23.045 06:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78016' 00:13:23.045 06:38:18 -- common/autotest_common.sh@955 -- # kill 78016 00:13:23.045 06:38:18 -- common/autotest_common.sh@960 -- # wait 78016 00:13:23.304 06:38:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.304 06:38:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:23.304 06:38:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:23.304 06:38:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.304 06:38:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:23.304 06:38:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.304 06:38:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.304 06:38:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.562 06:38:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
00:13:23.562 00:13:23.562 real 0m49.017s 00:13:23.562 user 2m38.057s 00:13:23.562 sys 0m37.001s 00:13:23.562 ************************************ 00:13:23.562 END TEST nvmf_multiconnection 00:13:23.562 ************************************ 00:13:23.562 06:38:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.562 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 06:38:18 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:23.562 06:38:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:23.562 06:38:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.562 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.562 ************************************ 00:13:23.562 START TEST nvmf_initiator_timeout 00:13:23.562 ************************************ 00:13:23.562 06:38:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:23.562 * Looking for test storage... 00:13:23.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:23.562 06:38:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:23.562 06:38:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:23.562 06:38:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:23.562 06:38:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:23.562 06:38:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:23.562 06:38:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:23.562 06:38:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:23.562 06:38:18 -- scripts/common.sh@335 -- # IFS=.-: 00:13:23.562 06:38:18 -- scripts/common.sh@335 -- # read -ra ver1 00:13:23.562 06:38:18 -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.562 06:38:18 -- scripts/common.sh@336 -- # read -ra ver2 00:13:23.562 06:38:18 -- scripts/common.sh@337 -- # local 'op=<' 00:13:23.562 06:38:18 -- scripts/common.sh@339 -- # ver1_l=2 00:13:23.562 06:38:18 -- scripts/common.sh@340 -- # ver2_l=1 00:13:23.562 06:38:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:23.562 06:38:18 -- scripts/common.sh@343 -- # case "$op" in 00:13:23.562 06:38:18 -- scripts/common.sh@344 -- # : 1 00:13:23.562 06:38:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:23.562 06:38:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.562 06:38:18 -- scripts/common.sh@364 -- # decimal 1 00:13:23.562 06:38:18 -- scripts/common.sh@352 -- # local d=1 00:13:23.562 06:38:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.562 06:38:18 -- scripts/common.sh@354 -- # echo 1 00:13:23.562 06:38:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:23.562 06:38:18 -- scripts/common.sh@365 -- # decimal 2 00:13:23.562 06:38:19 -- scripts/common.sh@352 -- # local d=2 00:13:23.562 06:38:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.562 06:38:19 -- scripts/common.sh@354 -- # echo 2 00:13:23.562 06:38:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:23.562 06:38:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:23.562 06:38:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:23.562 06:38:19 -- scripts/common.sh@367 -- # return 0 00:13:23.562 06:38:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.562 06:38:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.562 --rc genhtml_branch_coverage=1 00:13:23.562 --rc genhtml_function_coverage=1 00:13:23.562 --rc genhtml_legend=1 00:13:23.562 --rc geninfo_all_blocks=1 00:13:23.562 --rc geninfo_unexecuted_blocks=1 00:13:23.562 00:13:23.562 ' 00:13:23.562 06:38:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.562 --rc genhtml_branch_coverage=1 00:13:23.562 --rc genhtml_function_coverage=1 00:13:23.562 --rc genhtml_legend=1 00:13:23.562 --rc geninfo_all_blocks=1 00:13:23.562 --rc geninfo_unexecuted_blocks=1 00:13:23.562 00:13:23.562 ' 00:13:23.562 06:38:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.562 --rc genhtml_branch_coverage=1 00:13:23.562 --rc genhtml_function_coverage=1 00:13:23.562 --rc genhtml_legend=1 00:13:23.562 --rc geninfo_all_blocks=1 00:13:23.562 --rc geninfo_unexecuted_blocks=1 00:13:23.562 00:13:23.562 ' 00:13:23.562 06:38:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:23.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.562 --rc genhtml_branch_coverage=1 00:13:23.562 --rc genhtml_function_coverage=1 00:13:23.562 --rc genhtml_legend=1 00:13:23.562 --rc geninfo_all_blocks=1 00:13:23.563 --rc geninfo_unexecuted_blocks=1 00:13:23.563 00:13:23.563 ' 00:13:23.563 06:38:19 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.563 06:38:19 -- nvmf/common.sh@7 -- # uname -s 00:13:23.563 06:38:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.563 06:38:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.563 06:38:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.563 06:38:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.563 06:38:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.563 06:38:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.563 06:38:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.563 06:38:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.563 06:38:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.563 06:38:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.563 06:38:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 
00:13:23.563 06:38:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:13:23.563 06:38:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.563 06:38:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.563 06:38:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.563 06:38:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.563 06:38:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.563 06:38:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.563 06:38:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.821 06:38:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.821 06:38:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.821 06:38:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.821 06:38:19 -- paths/export.sh@5 -- # export PATH 00:13:23.821 06:38:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.821 06:38:19 -- nvmf/common.sh@46 -- # : 0 00:13:23.821 06:38:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.821 06:38:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.821 06:38:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.821 06:38:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.822 06:38:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.822 06:38:19 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:23.822 06:38:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.822 06:38:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.822 06:38:19 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.822 06:38:19 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.822 06:38:19 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:23.822 06:38:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.822 06:38:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.822 06:38:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.822 06:38:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.822 06:38:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.822 06:38:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.822 06:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.822 06:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.822 06:38:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:23.822 06:38:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:23.822 06:38:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:23.822 06:38:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:23.822 06:38:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:23.822 06:38:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:23.822 06:38:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.822 06:38:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.822 06:38:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.822 06:38:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:23.822 06:38:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.822 06:38:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.822 06:38:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.822 06:38:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.822 06:38:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.822 06:38:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.822 06:38:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.822 06:38:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.822 06:38:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:23.822 06:38:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:23.822 Cannot find device "nvmf_tgt_br" 00:13:23.822 06:38:19 -- nvmf/common.sh@154 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.822 Cannot find device "nvmf_tgt_br2" 00:13:23.822 06:38:19 -- nvmf/common.sh@155 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:23.822 06:38:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:23.822 Cannot find device "nvmf_tgt_br" 00:13:23.822 06:38:19 -- nvmf/common.sh@157 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:23.822 Cannot find device "nvmf_tgt_br2" 00:13:23.822 06:38:19 -- nvmf/common.sh@158 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:23.822 06:38:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:23.822 06:38:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:23.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.822 06:38:19 -- nvmf/common.sh@161 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.822 06:38:19 -- nvmf/common.sh@162 -- # true 00:13:23.822 06:38:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.822 06:38:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.822 06:38:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.822 06:38:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.822 06:38:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.822 06:38:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.822 06:38:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.822 06:38:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.822 06:38:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.822 06:38:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:23.822 06:38:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:23.822 06:38:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:23.822 06:38:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:23.822 06:38:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.822 06:38:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:24.080 06:38:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:24.080 06:38:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:24.080 06:38:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:24.080 06:38:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:24.080 06:38:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:24.081 06:38:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:24.081 06:38:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:24.081 06:38:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:24.081 06:38:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:24.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:13:24.081 00:13:24.081 --- 10.0.0.2 ping statistics --- 00:13:24.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.081 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:24.081 06:38:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:24.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:24.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:24.081 00:13:24.081 --- 10.0.0.3 ping statistics --- 00:13:24.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.081 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:24.081 06:38:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:24.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:24.081 00:13:24.081 --- 10.0.0.1 ping statistics --- 00:13:24.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.081 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:24.081 06:38:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.081 06:38:19 -- nvmf/common.sh@421 -- # return 0 00:13:24.081 06:38:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:24.081 06:38:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.081 06:38:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:24.081 06:38:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:24.081 06:38:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.081 06:38:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:24.081 06:38:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:24.081 06:38:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:24.081 06:38:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:24.081 06:38:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.081 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.081 06:38:19 -- nvmf/common.sh@469 -- # nvmfpid=79084 00:13:24.081 06:38:19 -- nvmf/common.sh@470 -- # waitforlisten 79084 00:13:24.081 06:38:19 -- common/autotest_common.sh@829 -- # '[' -z 79084 ']' 00:13:24.081 06:38:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.081 06:38:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.081 06:38:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.081 06:38:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.081 06:38:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.081 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.081 [2024-12-05 06:38:19.443017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:24.081 [2024-12-05 06:38:19.443122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.340 [2024-12-05 06:38:19.578334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.340 [2024-12-05 06:38:19.613156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.340 [2024-12-05 06:38:19.613522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.340 [2024-12-05 06:38:19.613687] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.340 [2024-12-05 06:38:19.613840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.340 [2024-12-05 06:38:19.614146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.340 [2024-12-05 06:38:19.614285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.340 [2024-12-05 06:38:19.614384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.340 [2024-12-05 06:38:19.614384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.340 06:38:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.340 06:38:19 -- common/autotest_common.sh@862 -- # return 0 00:13:24.340 06:38:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:24.340 06:38:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.340 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.340 06:38:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.340 06:38:19 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:24.340 06:38:19 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.340 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.340 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.340 Malloc0 00:13:24.340 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.340 06:38:19 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:24.341 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.341 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.341 Delay0 00:13:24.341 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.341 06:38:19 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.341 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.341 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.341 [2024-12-05 06:38:19.782196] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.341 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.341 06:38:19 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.341 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.341 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.341 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.341 06:38:19 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.341 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.341 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.599 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.599 06:38:19 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.599 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.599 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.599 [2024-12-05 06:38:19.810396] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.599 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.599 06:38:19 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.599 06:38:19 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.599 06:38:19 -- common/autotest_common.sh@1187 -- # local i=0 00:13:24.599 06:38:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.599 06:38:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:24.599 06:38:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:26.497 06:38:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:26.497 06:38:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:26.497 06:38:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.755 06:38:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:26.755 06:38:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.755 06:38:21 -- common/autotest_common.sh@1197 -- # return 0 00:13:26.755 06:38:21 -- target/initiator_timeout.sh@35 -- # fio_pid=79141 00:13:26.755 06:38:21 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:26.755 06:38:21 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:26.755 [global] 00:13:26.755 thread=1 00:13:26.755 invalidate=1 00:13:26.755 rw=write 00:13:26.755 time_based=1 00:13:26.755 runtime=60 00:13:26.755 ioengine=libaio 00:13:26.755 direct=1 00:13:26.755 bs=4096 00:13:26.755 iodepth=1 00:13:26.755 norandommap=0 00:13:26.755 numjobs=1 00:13:26.755 00:13:26.755 verify_dump=1 00:13:26.755 verify_backlog=512 00:13:26.755 verify_state_save=0 00:13:26.755 do_verify=1 00:13:26.755 verify=crc32c-intel 00:13:26.755 [job0] 00:13:26.755 filename=/dev/nvme0n1 00:13:26.755 Could not set queue depth (nvme0n1) 00:13:26.755 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.755 fio-3.35 00:13:26.755 Starting 1 thread 00:13:30.085 06:38:24 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:30.085 06:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.085 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.085 true 00:13:30.085 06:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.085 06:38:24 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:30.085 06:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.085 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.085 true 00:13:30.085 06:38:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.085 06:38:24 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:30.085 06:38:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.085 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:13:30.085 true 00:13:30.085 06:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.085 06:38:25 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:30.085 06:38:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.085 06:38:25 -- common/autotest_common.sh@10 -- # set +x 00:13:30.085 true 00:13:30.085 06:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.085 06:38:25 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:32.618 06:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.618 06:38:28 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 true 00:13:32.618 06:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:32.618 06:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.618 06:38:28 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 true 00:13:32.618 06:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:32.618 06:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.618 06:38:28 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 true 00:13:32.618 06:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:32.618 06:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.618 06:38:28 -- common/autotest_common.sh@10 -- # set +x 00:13:32.618 true 00:13:32.618 06:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:32.618 06:38:28 -- target/initiator_timeout.sh@54 -- # wait 79141 00:14:28.848 00:14:28.848 job0: (groupid=0, jobs=1): err= 0: pid=79162: Thu Dec 5 06:39:22 2024 00:14:28.848 read: IOPS=819, BW=3276KiB/s (3355kB/s)(192MiB/60000msec) 00:14:28.848 slat (nsec): min=9561, max=84499, avg=12704.88, stdev=4355.59 00:14:28.848 clat (usec): min=154, max=839, avg=203.62, stdev=22.92 00:14:28.848 lat (usec): min=164, max=850, avg=216.33, stdev=23.85 00:14:28.848 clat percentiles (usec): 00:14:28.848 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:14:28.848 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:14:28.848 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:14:28.848 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 408], 00:14:28.848 | 99.99th=[ 545] 00:14:28.848 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:14:28.848 slat (usec): min=12, max=9785, avg=20.45, stdev=56.14 00:14:28.848 clat (usec): min=67, max=40470k, avg=981.07, stdev=182542.82 00:14:28.848 lat (usec): min=132, max=40470k, avg=1001.51, stdev=182542.81 00:14:28.848 clat percentiles (usec): 00:14:28.848 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:14:28.848 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:14:28.848 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:14:28.848 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 441], 99.95th=[ 553], 00:14:28.848 | 99.99th=[ 930] 00:14:28.848 bw ( KiB/s): min= 5760, max=12288, per=100.00%, avg=9872.41, stdev=1494.87, samples=39 00:14:28.848 iops : min= 1440, max= 3072, avg=2468.10, stdev=373.72, samples=39 00:14:28.848 lat (usec) : 100=0.01%, 250=98.23%, 500=1.72%, 750=0.03%, 1000=0.01% 00:14:28.848 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:14:28.848 cpu : usr=0.53%, sys=2.11%, ctx=98302, majf=0, minf=5 00:14:28.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.848 issued rwts: total=49142,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.848 00:14:28.848 Run status group 0 (all jobs): 00:14:28.848 READ: bw=3276KiB/s (3355kB/s), 3276KiB/s-3276KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:14:28.848 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:14:28.848 00:14:28.848 Disk stats (read/write): 00:14:28.848 nvme0n1: ios=48959/49152, merge=0/0, ticks=10307/8338, in_queue=18645, util=99.80% 00:14:28.848 06:39:22 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.848 06:39:22 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.848 06:39:22 -- common/autotest_common.sh@1208 -- # local i=0 00:14:28.848 06:39:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:28.848 06:39:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.849 06:39:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:28.849 06:39:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.849 nvmf hotplug test: fio successful as expected 00:14:28.849 06:39:22 -- common/autotest_common.sh@1220 -- # return 0 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.849 06:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.849 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 06:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:28.849 06:39:22 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:28.849 06:39:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.849 06:39:22 -- nvmf/common.sh@116 -- # sync 00:14:28.849 06:39:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:28.849 06:39:22 -- nvmf/common.sh@119 -- # set +e 00:14:28.849 06:39:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.849 06:39:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:28.849 rmmod nvme_tcp 00:14:28.849 rmmod nvme_fabrics 00:14:28.849 rmmod nvme_keyring 00:14:28.849 06:39:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.849 06:39:22 -- nvmf/common.sh@123 -- # set -e 00:14:28.849 06:39:22 -- nvmf/common.sh@124 -- # return 0 00:14:28.849 06:39:22 -- nvmf/common.sh@477 -- # '[' -n 79084 ']' 00:14:28.849 06:39:22 -- nvmf/common.sh@478 -- # killprocess 79084 00:14:28.849 06:39:22 -- common/autotest_common.sh@936 -- # '[' -z 79084 ']' 00:14:28.849 06:39:22 -- common/autotest_common.sh@940 -- # kill -0 79084 00:14:28.849 06:39:22 -- common/autotest_common.sh@941 -- # uname 00:14:28.849 06:39:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.849 06:39:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79084 00:14:28.849 killing process with 
pid 79084 00:14:28.849 06:39:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:28.849 06:39:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:28.849 06:39:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79084' 00:14:28.849 06:39:22 -- common/autotest_common.sh@955 -- # kill 79084 00:14:28.849 06:39:22 -- common/autotest_common.sh@960 -- # wait 79084 00:14:28.849 06:39:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.849 06:39:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:28.849 06:39:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:28.849 06:39:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.849 06:39:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:28.849 06:39:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.849 06:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.849 06:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.849 06:39:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:28.849 00:14:28.849 real 1m3.777s 00:14:28.849 user 3m50.129s 00:14:28.849 sys 0m21.669s 00:14:28.849 06:39:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.849 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 ************************************ 00:14:28.849 END TEST nvmf_initiator_timeout 00:14:28.849 ************************************ 00:14:28.849 06:39:22 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:28.849 06:39:22 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:28.849 06:39:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.849 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 06:39:22 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:28.849 06:39:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.849 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 06:39:22 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:28.849 06:39:22 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:28.849 06:39:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.849 06:39:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.849 06:39:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.849 ************************************ 00:14:28.849 START TEST nvmf_identify 00:14:28.849 ************************************ 00:14:28.849 06:39:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:28.849 * Looking for test storage... 
00:14:28.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:28.849 06:39:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.849 06:39:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.849 06:39:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.849 06:39:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.849 06:39:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.849 06:39:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.849 06:39:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.849 06:39:22 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.849 06:39:22 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.849 06:39:22 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.849 06:39:22 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.849 06:39:22 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.849 06:39:22 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.849 06:39:22 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.849 06:39:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.849 06:39:22 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.849 06:39:22 -- scripts/common.sh@344 -- # : 1 00:14:28.849 06:39:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.849 06:39:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.849 06:39:22 -- scripts/common.sh@364 -- # decimal 1 00:14:28.849 06:39:22 -- scripts/common.sh@352 -- # local d=1 00:14:28.849 06:39:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.849 06:39:22 -- scripts/common.sh@354 -- # echo 1 00:14:28.849 06:39:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.849 06:39:22 -- scripts/common.sh@365 -- # decimal 2 00:14:28.849 06:39:22 -- scripts/common.sh@352 -- # local d=2 00:14:28.849 06:39:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.849 06:39:22 -- scripts/common.sh@354 -- # echo 2 00:14:28.849 06:39:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.849 06:39:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.849 06:39:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.849 06:39:22 -- scripts/common.sh@367 -- # return 0 00:14:28.849 06:39:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.849 06:39:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 06:39:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 06:39:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 
06:39:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.849 --rc genhtml_branch_coverage=1 00:14:28.849 --rc genhtml_function_coverage=1 00:14:28.849 --rc genhtml_legend=1 00:14:28.849 --rc geninfo_all_blocks=1 00:14:28.849 --rc geninfo_unexecuted_blocks=1 00:14:28.849 00:14:28.849 ' 00:14:28.849 06:39:22 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.849 06:39:22 -- nvmf/common.sh@7 -- # uname -s 00:14:28.849 06:39:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.849 06:39:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.849 06:39:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.849 06:39:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.849 06:39:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.849 06:39:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.849 06:39:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.849 06:39:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.849 06:39:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.849 06:39:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.849 06:39:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:14:28.849 06:39:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:14:28.849 06:39:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.849 06:39:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.850 06:39:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.850 06:39:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.850 06:39:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.850 06:39:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.850 06:39:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.850 06:39:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.850 06:39:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.850 06:39:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.850 06:39:22 -- paths/export.sh@5 -- # export PATH 00:14:28.850 06:39:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.850 06:39:22 -- nvmf/common.sh@46 -- # : 0 00:14:28.850 06:39:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.850 06:39:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.850 06:39:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.850 06:39:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.850 06:39:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.850 06:39:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.850 06:39:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.850 06:39:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.850 06:39:22 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.850 06:39:22 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.850 06:39:22 -- host/identify.sh@14 -- # nvmftestinit 00:14:28.850 06:39:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.850 06:39:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.850 06:39:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.850 06:39:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.850 06:39:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.850 06:39:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.850 06:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.850 06:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.850 06:39:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:28.850 06:39:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:28.850 06:39:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:28.850 06:39:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:28.850 06:39:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:28.850 06:39:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:28.850 06:39:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.850 06:39:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.850 06:39:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:28.850 06:39:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:28.850 06:39:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:28.850 06:39:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:28.850 06:39:22 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:28.850 06:39:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.850 06:39:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:28.850 06:39:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:28.850 06:39:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:28.850 06:39:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:28.850 06:39:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:28.850 06:39:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:28.850 Cannot find device "nvmf_tgt_br" 00:14:28.850 06:39:22 -- nvmf/common.sh@154 -- # true 00:14:28.850 06:39:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:28.850 Cannot find device "nvmf_tgt_br2" 00:14:28.850 06:39:22 -- nvmf/common.sh@155 -- # true 00:14:28.850 06:39:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:28.850 06:39:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:28.850 Cannot find device "nvmf_tgt_br" 00:14:28.850 06:39:22 -- nvmf/common.sh@157 -- # true 00:14:28.850 06:39:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:28.850 Cannot find device "nvmf_tgt_br2" 00:14:28.850 06:39:22 -- nvmf/common.sh@158 -- # true 00:14:28.850 06:39:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:28.850 06:39:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:28.850 06:39:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.850 06:39:23 -- nvmf/common.sh@161 -- # true 00:14:28.850 06:39:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.850 06:39:23 -- nvmf/common.sh@162 -- # true 00:14:28.850 06:39:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.850 06:39:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.850 06:39:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.850 06:39:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.850 06:39:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.850 06:39:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.850 06:39:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.850 06:39:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.850 06:39:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.850 06:39:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:28.850 06:39:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:28.850 06:39:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:28.850 06:39:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:28.850 06:39:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.850 06:39:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.850 06:39:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:28.850 06:39:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:28.850 06:39:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:28.850 06:39:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.850 06:39:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.850 06:39:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.850 06:39:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.850 06:39:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.850 06:39:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:28.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:28.850 00:14:28.850 --- 10.0.0.2 ping statistics --- 00:14:28.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.850 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:28.850 06:39:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:28.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:28.850 00:14:28.850 --- 10.0.0.3 ping statistics --- 00:14:28.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.850 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:28.850 06:39:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:28.850 00:14:28.850 --- 10.0.0.1 ping statistics --- 00:14:28.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.850 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:28.850 06:39:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.850 06:39:23 -- nvmf/common.sh@421 -- # return 0 00:14:28.850 06:39:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:28.850 06:39:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.850 06:39:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:28.850 06:39:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:28.850 06:39:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.850 06:39:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:28.850 06:39:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:28.850 06:39:23 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:28.850 06:39:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.851 06:39:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
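Everything from ip netns add through the three ping checks above is nvmf_veth_init building the test topology: the target runs in its own network namespace, veth pairs connect it to the host, the peer ends are enslaved to a bridge, and iptables accepts TCP port 4420 on the initiator interface. Condensed to its essentials, with the same device names and addresses as the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and all error handling are omitted from this sketch):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host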
00:14:28.851 06:39:23 -- host/identify.sh@19 -- # nvmfpid=80018 00:14:28.851 06:39:23 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:28.851 06:39:23 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.851 06:39:23 -- host/identify.sh@23 -- # waitforlisten 80018 00:14:28.851 06:39:23 -- common/autotest_common.sh@829 -- # '[' -z 80018 ']' 00:14:28.851 06:39:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.851 06:39:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.851 06:39:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.851 06:39:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.851 06:39:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.851 [2024-12-05 06:39:23.288864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:28.851 [2024-12-05 06:39:23.288964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.851 [2024-12-05 06:39:23.429093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.851 [2024-12-05 06:39:23.470713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.851 [2024-12-05 06:39:23.471162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.851 [2024-12-05 06:39:23.471350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.851 [2024-12-05 06:39:23.471596] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
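Above, nvmf_tgt is launched inside the namespace (-i 0 selects the shared-memory instance ID, -e 0xFFFF the tracepoint group mask, -m 0xF a four-core mask) and the harness blocks until the RPC socket is listening, which is what the "Waiting for process..." message reports. A reduced sketch of that start-and-wait step; the polling loop is illustrative, not the exact waitforlisten implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    trap 'kill $nvmfpid' SIGINT SIGTERM EXIT   # the trace installs a richer cleanup trap
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break     # RPC socket appears once the app is up
        sleep 0.1
    done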
00:14:28.851 [2024-12-05 06:39:23.471851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.851 [2024-12-05 06:39:23.471943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.851 [2024-12-05 06:39:23.472033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.851 [2024-12-05 06:39:23.472032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.851 06:39:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.851 06:39:24 -- common/autotest_common.sh@862 -- # return 0 00:14:28.851 06:39:24 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.851 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.851 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:28.851 [2024-12-05 06:39:24.258516] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.851 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.851 06:39:24 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:28.851 06:39:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.851 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 06:39:24 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 Malloc0 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 [2024-12-05 06:39:24.355897] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:29.113 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.113 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.113 [2024-12-05 06:39:24.371583] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:29.113 [ 
00:14:29.113 { 00:14:29.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.113 "subtype": "Discovery", 00:14:29.113 "listen_addresses": [ 00:14:29.113 { 00:14:29.113 "transport": "TCP", 00:14:29.113 "trtype": "TCP", 00:14:29.113 "adrfam": "IPv4", 00:14:29.113 "traddr": "10.0.0.2", 00:14:29.113 "trsvcid": "4420" 00:14:29.113 } 00:14:29.113 ], 00:14:29.113 "allow_any_host": true, 00:14:29.113 "hosts": [] 00:14:29.113 }, 00:14:29.113 { 00:14:29.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.113 "subtype": "NVMe", 00:14:29.113 "listen_addresses": [ 00:14:29.113 { 00:14:29.113 "transport": "TCP", 00:14:29.113 "trtype": "TCP", 00:14:29.113 "adrfam": "IPv4", 00:14:29.113 "traddr": "10.0.0.2", 00:14:29.113 "trsvcid": "4420" 00:14:29.113 } 00:14:29.113 ], 00:14:29.113 "allow_any_host": true, 00:14:29.113 "hosts": [], 00:14:29.113 "serial_number": "SPDK00000000000001", 00:14:29.113 "model_number": "SPDK bdev Controller", 00:14:29.113 "max_namespaces": 32, 00:14:29.113 "min_cntlid": 1, 00:14:29.113 "max_cntlid": 65519, 00:14:29.113 "namespaces": [ 00:14:29.113 { 00:14:29.113 "nsid": 1, 00:14:29.113 "bdev_name": "Malloc0", 00:14:29.113 "name": "Malloc0", 00:14:29.113 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:29.113 "eui64": "ABCDEF0123456789", 00:14:29.113 "uuid": "a0847a6d-f244-4c9f-afca-1e49f94fe0a4" 00:14:29.113 } 00:14:29.113 ] 00:14:29.113 } 00:14:29.113 ] 00:14:29.113 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.113 06:39:24 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:29.113 [2024-12-05 06:39:24.410241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
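The JSON listing above is the output of nvmf_get_subsystems and reflects the rpc_cmd calls traced just before it: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, an NVM subsystem with that bdev as namespace 1, and listeners for both the subsystem and discovery. Driven by hand against the same RPC socket, the sequence amounts to (a sketch; scripts/rpc.py paths are relative to the spdk repo):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems                       # prints the JSON shown above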
00:14:29.113 [2024-12-05 06:39:24.410433] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80053 ] 00:14:29.113 [2024-12-05 06:39:24.542835] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:29.113 [2024-12-05 06:39:24.542904] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:29.113 [2024-12-05 06:39:24.542911] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:29.113 [2024-12-05 06:39:24.542921] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:29.113 [2024-12-05 06:39:24.542932] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:29.113 [2024-12-05 06:39:24.543087] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:29.114 [2024-12-05 06:39:24.543147] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18a5540 0 00:14:29.114 [2024-12-05 06:39:24.556405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:29.114 [2024-12-05 06:39:24.556429] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:29.114 [2024-12-05 06:39:24.556451] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:29.114 [2024-12-05 06:39:24.556455] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:29.114 [2024-12-05 06:39:24.556512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.556520] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.556524] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.556538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:29.114 [2024-12-05 06:39:24.556570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.564374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.564413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.564433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.564454] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:29.114 [2024-12-05 06:39:24.564462] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:29.114 [2024-12-05 06:39:24.564469] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:29.114 [2024-12-05 06:39:24.564484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 
06:39:24.564493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.564503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.564530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.564599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.564607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.564611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.564622] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:29.114 [2024-12-05 06:39:24.564631] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:29.114 [2024-12-05 06:39:24.564639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.564655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.564675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.564736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.564743] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.564747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.564758] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:29.114 [2024-12-05 06:39:24.564767] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.564774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.564790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.564808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.564864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.564871] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.564875] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.564887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.564897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.564906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.564913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.564930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.564997] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.565005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.565008] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565012] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.565019] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:29.114 [2024-12-05 06:39:24.565024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.565032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.565138] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:29.114 [2024-12-05 06:39:24.565143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.565152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.565167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.565185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.565249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.565256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.565259] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
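The nvme_ctrlr/nvme_tcp DEBUG lines through this stretch trace the standard NVMe-oF admin-queue bring-up that spdk_nvme_identify performs against the discovery controller: FABRIC CONNECT, property reads of VS and CAP, clearing CC.EN and waiting for CSTS.RDY = 0, setting CC.EN = 1 and waiting for CSTS.RDY = 1, then IDENTIFY. The same handshake can also be exercised with the kernel initiator whose module was loaded earlier by modprobe nvme-tcp (a sketch; nvme-cli is assumed to be installed and is not part of this test):

    nvme discover -t tcp -a 10.0.0.2 -s 4420      # query the discovery controller
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                     # shows the namespace backed by Malloc0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1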
00:14:29.114 [2024-12-05 06:39:24.565263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.565269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:29.114 [2024-12-05 06:39:24.565279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.565295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.565312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.565374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.565383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.114 [2024-12-05 06:39:24.565387] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.114 [2024-12-05 06:39:24.565396] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:29.114 [2024-12-05 06:39:24.565401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:29.114 [2024-12-05 06:39:24.565410] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:29.114 [2024-12-05 06:39:24.565425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:29.114 [2024-12-05 06:39:24.565435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.114 [2024-12-05 06:39:24.565451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.114 [2024-12-05 06:39:24.565471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.114 [2024-12-05 06:39:24.565570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.114 [2024-12-05 06:39:24.565577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.114 [2024-12-05 06:39:24.565597] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565602] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18a5540): datao=0, datal=4096, cccid=0 00:14:29.114 [2024-12-05 06:39:24.565606] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de220) on tqpair(0x18a5540): expected_datao=0, 
payload_size=4096 00:14:29.114 [2024-12-05 06:39:24.565615] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565620] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.114 [2024-12-05 06:39:24.565630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.114 [2024-12-05 06:39:24.565636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.565640] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565644] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.565653] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:29.115 [2024-12-05 06:39:24.565659] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:29.115 [2024-12-05 06:39:24.565664] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:29.115 [2024-12-05 06:39:24.565670] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:29.115 [2024-12-05 06:39:24.565675] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:29.115 [2024-12-05 06:39:24.565680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:29.115 [2024-12-05 06:39:24.565693] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:29.115 [2024-12-05 06:39:24.565702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.115 [2024-12-05 06:39:24.565739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.115 [2024-12-05 06:39:24.565823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.115 [2024-12-05 06:39:24.565831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.565835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de220) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.565848] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565853] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.115 [2024-12-05 
06:39:24.565878] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565882] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565886] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.115 [2024-12-05 06:39:24.565899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.115 [2024-12-05 06:39:24.565919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565927] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.115 [2024-12-05 06:39:24.565939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:29.115 [2024-12-05 06:39:24.565952] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:29.115 [2024-12-05 06:39:24.565960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.565968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.565976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.115 [2024-12-05 06:39:24.565997] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de220, cid 0, qid 0 00:14:29.115 [2024-12-05 06:39:24.566005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de380, cid 1, qid 0 00:14:29.115 [2024-12-05 06:39:24.566010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de4e0, cid 2, qid 0 00:14:29.115 [2024-12-05 06:39:24.566015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.115 [2024-12-05 06:39:24.566020] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de7a0, cid 4, qid 0 00:14:29.115 [2024-12-05 06:39:24.566170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.115 [2024-12-05 06:39:24.566177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.566181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x18de7a0) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.566191] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:29.115 [2024-12-05 06:39:24.566196] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:29.115 [2024-12-05 06:39:24.566207] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566211] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.566222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.115 [2024-12-05 06:39:24.566240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de7a0, cid 4, qid 0 00:14:29.115 [2024-12-05 06:39:24.566313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.115 [2024-12-05 06:39:24.566320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.115 [2024-12-05 06:39:24.566324] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566344] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18a5540): datao=0, datal=4096, cccid=4 00:14:29.115 [2024-12-05 06:39:24.566349] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de7a0) on tqpair(0x18a5540): expected_datao=0, payload_size=4096 00:14:29.115 [2024-12-05 06:39:24.566357] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566361] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.115 [2024-12-05 06:39:24.566376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.566380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de7a0) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.566412] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:29.115 [2024-12-05 06:39:24.566438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.566456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.115 [2024-12-05 06:39:24.566464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.566478] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.115 [2024-12-05 06:39:24.566504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de7a0, cid 4, qid 0 00:14:29.115 [2024-12-05 06:39:24.566512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de900, cid 5, qid 0 00:14:29.115 [2024-12-05 06:39:24.566640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.115 [2024-12-05 06:39:24.566648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.115 [2024-12-05 06:39:24.566652] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566656] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18a5540): datao=0, datal=1024, cccid=4 00:14:29.115 [2024-12-05 06:39:24.566660] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de7a0) on tqpair(0x18a5540): expected_datao=0, payload_size=1024 00:14:29.115 [2024-12-05 06:39:24.566668] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566672] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.115 [2024-12-05 06:39:24.566684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.566688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566692] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de900) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.566727] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.115 [2024-12-05 06:39:24.566750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.115 [2024-12-05 06:39:24.566754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566758] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de7a0) on tqpair=0x18a5540 00:14:29.115 [2024-12-05 06:39:24.566790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.115 [2024-12-05 06:39:24.566800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18a5540) 00:14:29.115 [2024-12-05 06:39:24.566808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.115 [2024-12-05 06:39:24.566834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de7a0, cid 4, qid 0 00:14:29.115 [2024-12-05 06:39:24.566915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.115 [2024-12-05 06:39:24.566923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.115 [2024-12-05 06:39:24.566927] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566931] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18a5540): datao=0, datal=3072, cccid=4 00:14:29.116 [2024-12-05 06:39:24.566936] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de7a0) on tqpair(0x18a5540): expected_datao=0, payload_size=3072 00:14:29.116 [2024-12-05 
06:39:24.566944] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566948] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.116 [2024-12-05 06:39:24.566969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.116 [2024-12-05 06:39:24.566973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de7a0) on tqpair=0x18a5540 00:14:29.116 [2024-12-05 06:39:24.566988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.566996] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18a5540) 00:14:29.116 [2024-12-05 06:39:24.567004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.116 [2024-12-05 06:39:24.567032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de7a0, cid 4, qid 0 00:14:29.116 [2024-12-05 06:39:24.567116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.116 [2024-12-05 06:39:24.567123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.116 [2024-12-05 06:39:24.567127] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.567131] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18a5540): datao=0, datal=8, cccid=4 00:14:29.116 [2024-12-05 06:39:24.567137] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de7a0) on tqpair(0x18a5540): expected_datao=0, payload_size=8 00:14:29.116 [2024-12-05 06:39:24.567144] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.567149] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.567164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.116 [2024-12-05 06:39:24.567172] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.116 [2024-12-05 06:39:24.567176] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.116 [2024-12-05 06:39:24.567192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de7a0) on tqpair=0x18a5540 00:14:29.116 ===================================================== 00:14:29.116 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:29.116 ===================================================== 00:14:29.116 Controller Capabilities/Features 00:14:29.116 ================================ 00:14:29.116 Vendor ID: 0000 00:14:29.116 Subsystem Vendor ID: 0000 00:14:29.116 Serial Number: .................... 00:14:29.116 Model Number: ........................................ 
00:14:29.116 Firmware Version: 24.01.1 00:14:29.116 Recommended Arb Burst: 0 00:14:29.116 IEEE OUI Identifier: 00 00 00 00:14:29.116 Multi-path I/O 00:14:29.116 May have multiple subsystem ports: No 00:14:29.116 May have multiple controllers: No 00:14:29.116 Associated with SR-IOV VF: No 00:14:29.116 Max Data Transfer Size: 131072 00:14:29.116 Max Number of Namespaces: 0 00:14:29.116 Max Number of I/O Queues: 1024 00:14:29.116 NVMe Specification Version (VS): 1.3 00:14:29.116 NVMe Specification Version (Identify): 1.3 00:14:29.116 Maximum Queue Entries: 128 00:14:29.116 Contiguous Queues Required: Yes 00:14:29.116 Arbitration Mechanisms Supported 00:14:29.116 Weighted Round Robin: Not Supported 00:14:29.116 Vendor Specific: Not Supported 00:14:29.116 Reset Timeout: 15000 ms 00:14:29.116 Doorbell Stride: 4 bytes 00:14:29.116 NVM Subsystem Reset: Not Supported 00:14:29.116 Command Sets Supported 00:14:29.116 NVM Command Set: Supported 00:14:29.116 Boot Partition: Not Supported 00:14:29.116 Memory Page Size Minimum: 4096 bytes 00:14:29.116 Memory Page Size Maximum: 4096 bytes 00:14:29.116 Persistent Memory Region: Not Supported 00:14:29.116 Optional Asynchronous Events Supported 00:14:29.116 Namespace Attribute Notices: Not Supported 00:14:29.116 Firmware Activation Notices: Not Supported 00:14:29.116 ANA Change Notices: Not Supported 00:14:29.116 PLE Aggregate Log Change Notices: Not Supported 00:14:29.116 LBA Status Info Alert Notices: Not Supported 00:14:29.116 EGE Aggregate Log Change Notices: Not Supported 00:14:29.116 Normal NVM Subsystem Shutdown event: Not Supported 00:14:29.116 Zone Descriptor Change Notices: Not Supported 00:14:29.116 Discovery Log Change Notices: Supported 00:14:29.116 Controller Attributes 00:14:29.116 128-bit Host Identifier: Not Supported 00:14:29.116 Non-Operational Permissive Mode: Not Supported 00:14:29.116 NVM Sets: Not Supported 00:14:29.116 Read Recovery Levels: Not Supported 00:14:29.116 Endurance Groups: Not Supported 00:14:29.116 Predictable Latency Mode: Not Supported 00:14:29.116 Traffic Based Keep ALive: Not Supported 00:14:29.116 Namespace Granularity: Not Supported 00:14:29.116 SQ Associations: Not Supported 00:14:29.116 UUID List: Not Supported 00:14:29.116 Multi-Domain Subsystem: Not Supported 00:14:29.116 Fixed Capacity Management: Not Supported 00:14:29.116 Variable Capacity Management: Not Supported 00:14:29.116 Delete Endurance Group: Not Supported 00:14:29.116 Delete NVM Set: Not Supported 00:14:29.116 Extended LBA Formats Supported: Not Supported 00:14:29.116 Flexible Data Placement Supported: Not Supported 00:14:29.116 00:14:29.116 Controller Memory Buffer Support 00:14:29.116 ================================ 00:14:29.116 Supported: No 00:14:29.116 00:14:29.116 Persistent Memory Region Support 00:14:29.116 ================================ 00:14:29.116 Supported: No 00:14:29.116 00:14:29.116 Admin Command Set Attributes 00:14:29.116 ============================ 00:14:29.116 Security Send/Receive: Not Supported 00:14:29.116 Format NVM: Not Supported 00:14:29.116 Firmware Activate/Download: Not Supported 00:14:29.116 Namespace Management: Not Supported 00:14:29.116 Device Self-Test: Not Supported 00:14:29.116 Directives: Not Supported 00:14:29.116 NVMe-MI: Not Supported 00:14:29.116 Virtualization Management: Not Supported 00:14:29.116 Doorbell Buffer Config: Not Supported 00:14:29.116 Get LBA Status Capability: Not Supported 00:14:29.116 Command & Feature Lockdown Capability: Not Supported 00:14:29.116 Abort Command Limit: 1 00:14:29.116 
Async Event Request Limit: 4 00:14:29.116 Number of Firmware Slots: N/A 00:14:29.116 Firmware Slot 1 Read-Only: N/A 00:14:29.116 Firmware Activation Without Reset: N/A 00:14:29.116 Multiple Update Detection Support: N/A 00:14:29.116 Firmware Update Granularity: No Information Provided 00:14:29.116 Per-Namespace SMART Log: No 00:14:29.116 Asymmetric Namespace Access Log Page: Not Supported 00:14:29.116 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:29.116 Command Effects Log Page: Not Supported 00:14:29.116 Get Log Page Extended Data: Supported 00:14:29.116 Telemetry Log Pages: Not Supported 00:14:29.116 Persistent Event Log Pages: Not Supported 00:14:29.116 Supported Log Pages Log Page: May Support 00:14:29.116 Commands Supported & Effects Log Page: Not Supported 00:14:29.116 Feature Identifiers & Effects Log Page:May Support 00:14:29.116 NVMe-MI Commands & Effects Log Page: May Support 00:14:29.116 Data Area 4 for Telemetry Log: Not Supported 00:14:29.116 Error Log Page Entries Supported: 128 00:14:29.116 Keep Alive: Not Supported 00:14:29.116 00:14:29.116 NVM Command Set Attributes 00:14:29.116 ========================== 00:14:29.116 Submission Queue Entry Size 00:14:29.116 Max: 1 00:14:29.116 Min: 1 00:14:29.116 Completion Queue Entry Size 00:14:29.116 Max: 1 00:14:29.116 Min: 1 00:14:29.116 Number of Namespaces: 0 00:14:29.116 Compare Command: Not Supported 00:14:29.116 Write Uncorrectable Command: Not Supported 00:14:29.116 Dataset Management Command: Not Supported 00:14:29.116 Write Zeroes Command: Not Supported 00:14:29.116 Set Features Save Field: Not Supported 00:14:29.116 Reservations: Not Supported 00:14:29.116 Timestamp: Not Supported 00:14:29.116 Copy: Not Supported 00:14:29.116 Volatile Write Cache: Not Present 00:14:29.116 Atomic Write Unit (Normal): 1 00:14:29.116 Atomic Write Unit (PFail): 1 00:14:29.116 Atomic Compare & Write Unit: 1 00:14:29.116 Fused Compare & Write: Supported 00:14:29.116 Scatter-Gather List 00:14:29.116 SGL Command Set: Supported 00:14:29.116 SGL Keyed: Supported 00:14:29.116 SGL Bit Bucket Descriptor: Not Supported 00:14:29.116 SGL Metadata Pointer: Not Supported 00:14:29.116 Oversized SGL: Not Supported 00:14:29.116 SGL Metadata Address: Not Supported 00:14:29.116 SGL Offset: Supported 00:14:29.116 Transport SGL Data Block: Not Supported 00:14:29.116 Replay Protected Memory Block: Not Supported 00:14:29.116 00:14:29.116 Firmware Slot Information 00:14:29.116 ========================= 00:14:29.116 Active slot: 0 00:14:29.116 00:14:29.116 00:14:29.116 Error Log 00:14:29.116 ========= 00:14:29.116 00:14:29.116 Active Namespaces 00:14:29.116 ================= 00:14:29.116 Discovery Log Page 00:14:29.116 ================== 00:14:29.116 Generation Counter: 2 00:14:29.116 Number of Records: 2 00:14:29.116 Record Format: 0 00:14:29.117 00:14:29.117 Discovery Log Entry 0 00:14:29.117 ---------------------- 00:14:29.117 Transport Type: 3 (TCP) 00:14:29.117 Address Family: 1 (IPv4) 00:14:29.117 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:29.117 Entry Flags: 00:14:29.117 Duplicate Returned Information: 1 00:14:29.117 Explicit Persistent Connection Support for Discovery: 1 00:14:29.117 Transport Requirements: 00:14:29.117 Secure Channel: Not Required 00:14:29.117 Port ID: 0 (0x0000) 00:14:29.117 Controller ID: 65535 (0xffff) 00:14:29.117 Admin Max SQ Size: 128 00:14:29.117 Transport Service Identifier: 4420 00:14:29.117 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:29.117 Transport Address: 10.0.0.2 00:14:29.117 
Discovery Log Entry 1 00:14:29.117 ---------------------- 00:14:29.117 Transport Type: 3 (TCP) 00:14:29.117 Address Family: 1 (IPv4) 00:14:29.117 Subsystem Type: 2 (NVM Subsystem) 00:14:29.117 Entry Flags: 00:14:29.117 Duplicate Returned Information: 0 00:14:29.117 Explicit Persistent Connection Support for Discovery: 0 00:14:29.117 Transport Requirements: 00:14:29.117 Secure Channel: Not Required 00:14:29.117 Port ID: 0 (0x0000) 00:14:29.117 Controller ID: 65535 (0xffff) 00:14:29.117 Admin Max SQ Size: 128 00:14:29.117 Transport Service Identifier: 4420 00:14:29.117 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:29.117 Transport Address: 10.0.0.2 [2024-12-05 06:39:24.567327] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:29.117 [2024-12-05 06:39:24.567361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.117 [2024-12-05 06:39:24.567370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.117 [2024-12-05 06:39:24.567377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.117 [2024-12-05 06:39:24.567383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.117 [2024-12-05 06:39:24.567394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.567410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.567435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.567496] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.567504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.567508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.567522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.567538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.567560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.567650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.567671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.567675] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.567685] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:29.117 [2024-12-05 06:39:24.567690] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:29.117 [2024-12-05 06:39:24.567700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567705] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.567715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.567732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.567800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.567807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.567811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.567827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.567843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.567860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.567930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.567937] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.567941] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567945] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.567956] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567961] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.567965] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.567972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.567990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.568062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 
06:39:24.568069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.568073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.568088] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.568119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.568136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.568191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.568198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.568202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.568217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.568233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.568250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.568305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.568312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.568315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.568330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.568339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.568346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.568362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.572406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.117 [2024-12-05 06:39:24.572418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.117 [2024-12-05 06:39:24.572423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
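The nvme_ctrlr.c lines above (RTD3E = 0 us, shutdown timeout = 10000 ms) open the spec-defined shutdown handshake for the discovery controller: because RTD3E is zero the driver falls back to its 10 s default timeout, writes CC.SHN, then polls CSTS.SHST until the controller reports shutdown complete — which the trace just below records after about 4 ms. In SPDK the whole sequence runs inside the detach path; a small hedged sketch of observing the handshake state through the public register accessor:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Sketch only: the CC.SHN write is internal to SPDK's detach path;
 * this merely reads CSTS the way the NVMe spec defines the handshake. */
static bool
shutdown_handshake_complete(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_csts_register csts =
                spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        /* SHST: 0 = normal operation, 1 = shutdown occurring, 2 = complete */
        return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}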
00:14:29.117 [2024-12-05 06:39:24.572427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.117 [2024-12-05 06:39:24.572443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.572448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.117 [2024-12-05 06:39:24.572452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18a5540) 00:14:29.117 [2024-12-05 06:39:24.572461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.117 [2024-12-05 06:39:24.572488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de640, cid 3, qid 0 00:14:29.117 [2024-12-05 06:39:24.572558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.118 [2024-12-05 06:39:24.572565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.118 [2024-12-05 06:39:24.572570] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.118 [2024-12-05 06:39:24.572574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18de640) on tqpair=0x18a5540 00:14:29.118 [2024-12-05 06:39:24.572584] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:29.383 00:14:29.383 06:39:24 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:29.383 [2024-12-05 06:39:24.607792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:29.383 [2024-12-05 06:39:24.607840] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80055 ] 00:14:29.383 [2024-12-05 06:39:24.746891] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:29.383 [2024-12-05 06:39:24.746963] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:29.383 [2024-12-05 06:39:24.746970] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:29.383 [2024-12-05 06:39:24.746981] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:29.383 [2024-12-05 06:39:24.746992] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:29.383 [2024-12-05 06:39:24.747101] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:29.383 [2024-12-05 06:39:24.747167] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa82540 0 00:14:29.383 [2024-12-05 06:39:24.752374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:29.383 [2024-12-05 06:39:24.752398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:29.383 [2024-12-05 06:39:24.752420] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:29.383 [2024-12-05 06:39:24.752424] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:29.383 [2024-12-05 06:39:24.752464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.752472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.752476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.383 [2024-12-05 06:39:24.752489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:29.383 [2024-12-05 06:39:24.752519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.383 [2024-12-05 06:39:24.760389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.383 [2024-12-05 06:39:24.760409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.383 [2024-12-05 06:39:24.760431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.383 [2024-12-05 06:39:24.760450] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:29.383 [2024-12-05 06:39:24.760458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:29.383 [2024-12-05 06:39:24.760465] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:29.383 [2024-12-05 06:39:24.760480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760489] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.383 [2024-12-05 06:39:24.760499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.383 [2024-12-05 06:39:24.760525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.383 [2024-12-05 06:39:24.760581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.383 [2024-12-05 06:39:24.760588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.383 [2024-12-05 06:39:24.760598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.383 [2024-12-05 06:39:24.760608] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:29.383 [2024-12-05 06:39:24.760615] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:29.383 [2024-12-05 06:39:24.760623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.383 [2024-12-05 06:39:24.760639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.383 [2024-12-05 06:39:24.760657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.383 [2024-12-05 06:39:24.760707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.383 [2024-12-05 06:39:24.760714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.383 [2024-12-05 06:39:24.760718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.383 [2024-12-05 06:39:24.760722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.383 [2024-12-05 06:39:24.760727] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:29.383 [2024-12-05 06:39:24.760736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:29.383 [2024-12-05 06:39:24.760744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.760759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.384 [2024-12-05 06:39:24.760792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.760844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.760851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.760855] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760859] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.760865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:29.384 [2024-12-05 06:39:24.760876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760885] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.760892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.384 [2024-12-05 06:39:24.760909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.760954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.760961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.760965] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.760970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.760975] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:29.384 [2024-12-05 06:39:24.760980] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:29.384 [2024-12-05 06:39:24.760988] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:29.384 [2024-12-05 06:39:24.761094] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:29.384 [2024-12-05 06:39:24.761107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:29.384 [2024-12-05 06:39:24.761118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.384 [2024-12-05 06:39:24.761154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.761223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.761238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.761243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.761253] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:29.384 [2024-12-05 06:39:24.761264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.384 [2024-12-05 06:39:24.761298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.761346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.761354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.761358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.761367] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:29.384 [2024-12-05 06:39:24.761373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761381] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:29.384 [2024-12-05 06:39:24.761396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.384 [2024-12-05 06:39:24.761441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.761529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.384 [2024-12-05 06:39:24.761536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.384 [2024-12-05 06:39:24.761540] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761544] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=4096, cccid=0 00:14:29.384 [2024-12-05 06:39:24.761549] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb220) on tqpair(0xa82540): expected_datao=0, payload_size=4096 00:14:29.384 [2024-12-05 06:39:24.761558] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761563] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 
06:39:24.761571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.761578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.761582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.761594] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:29.384 [2024-12-05 06:39:24.761599] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:29.384 [2024-12-05 06:39:24.761604] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:29.384 [2024-12-05 06:39:24.761609] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:29.384 [2024-12-05 06:39:24.761614] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:29.384 [2024-12-05 06:39:24.761619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761632] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.384 [2024-12-05 06:39:24.761676] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.384 [2024-12-05 06:39:24.761735] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.384 [2024-12-05 06:39:24.761742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.384 [2024-12-05 06:39:24.761746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb220) on tqpair=0xa82540 00:14:29.384 [2024-12-05 06:39:24.761758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.384 [2024-12-05 06:39:24.761780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa82540) 
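At this point the init state machine arms asynchronous events: one SET FEATURES ASYNC EVENT CONFIGURATION (feature 0Bh, cdw10:0000000b) just above, then four ASYNC EVENT REQUEST (0Ch) submissions — cid 0 above, cids 1-3 continuing below — matching the advertised Async Event Request Limit of 4. Outstanding AERs complete with ABORTED - SQ DELETION at controller teardown, as seen during the discovery-controller destruct earlier. Applications consume AER completions through a callback; a hedged sketch with the public API, where aer_handler is a hypothetical handler name:

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical handler: SPDK re-submits the AER slot internally after
 * delivering each completion here. */
static void
aer_handler(void *arg, const struct spdk_nvme_cpl *cpl)
{
        union spdk_nvme_async_event_completion ev;

        if (spdk_nvme_cpl_is_error(cpl)) {
                return;  /* e.g. ABORTED - SQ DELETION at teardown */
        }
        ev.raw = cpl->cdw0;
        printf("async event: type=%u info=0x%x log page=0x%x\n",
               ev.bits.async_event_type, ev.bits.async_event_info,
               ev.bits.log_page_identifier);
}

static void
watch_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_handler, NULL);
        /* Delivery happens from the app's admin-queue poll loop: */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}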
00:14:29.384 [2024-12-05 06:39:24.761793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.384 [2024-12-05 06:39:24.761799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.384 [2024-12-05 06:39:24.761819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.384 [2024-12-05 06:39:24.761832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.384 [2024-12-05 06:39:24.761837] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761850] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:29.384 [2024-12-05 06:39:24.761857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.384 [2024-12-05 06:39:24.761865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.761872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.385 [2024-12-05 06:39:24.761893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb220, cid 0, qid 0 00:14:29.385 [2024-12-05 06:39:24.761900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb380, cid 1, qid 0 00:14:29.385 [2024-12-05 06:39:24.761905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb4e0, cid 2, qid 0 00:14:29.385 [2024-12-05 06:39:24.761910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.385 [2024-12-05 06:39:24.761915] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.761999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.385 [2024-12-05 06:39:24.762006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.385 [2024-12-05 06:39:24.762010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762014] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.385 [2024-12-05 06:39:24.762020] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:29.385 [2024-12-05 06:39:24.762025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762034] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762056] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.385 [2024-12-05 06:39:24.762084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.762134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.385 [2024-12-05 06:39:24.762141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.385 [2024-12-05 06:39:24.762145] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762149] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.385 [2024-12-05 06:39:24.762209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.385 [2024-12-05 06:39:24.762260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.762340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.385 [2024-12-05 06:39:24.762349] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.385 [2024-12-05 06:39:24.762353] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762357] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=4096, cccid=4 00:14:29.385 [2024-12-05 06:39:24.762362] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb7a0) on tqpair(0xa82540): expected_datao=0, payload_size=4096 00:14:29.385 [2024-12-05 06:39:24.762370] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762374] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:29.385 [2024-12-05 06:39:24.762389] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.385 [2024-12-05 06:39:24.762393] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.385 [2024-12-05 06:39:24.762412] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:29.385 [2024-12-05 06:39:24.762422] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762432] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.385 [2024-12-05 06:39:24.762477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.762546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.385 [2024-12-05 06:39:24.762553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.385 [2024-12-05 06:39:24.762557] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762561] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=4096, cccid=4 00:14:29.385 [2024-12-05 06:39:24.762566] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb7a0) on tqpair(0xa82540): expected_datao=0, payload_size=4096 00:14:29.385 [2024-12-05 06:39:24.762573] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762578] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.385 [2024-12-05 06:39:24.762592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.385 [2024-12-05 06:39:24.762596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.385 [2024-12-05 06:39:24.762614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762625] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762642] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.385 [2024-12-05 06:39:24.762669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.762725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.385 [2024-12-05 06:39:24.762732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.385 [2024-12-05 06:39:24.762736] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762739] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=4096, cccid=4 00:14:29.385 [2024-12-05 06:39:24.762744] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb7a0) on tqpair(0xa82540): expected_datao=0, payload_size=4096 00:14:29.385 [2024-12-05 06:39:24.762752] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762756] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.385 [2024-12-05 06:39:24.762771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.385 [2024-12-05 06:39:24.762774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.385 [2024-12-05 06:39:24.762787] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762796] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762806] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762819] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762825] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:29.385 [2024-12-05 06:39:24.762830] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:29.385 [2024-12-05 06:39:24.762835] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:29.385 [2024-12-05 06:39:24.762867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762877] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762881] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762889] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.385 [2024-12-05 06:39:24.762897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.385 [2024-12-05 06:39:24.762904] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa82540) 00:14:29.385 [2024-12-05 06:39:24.762911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.385 [2024-12-05 06:39:24.762940] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.385 [2024-12-05 06:39:24.762948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb900, cid 5, qid 0 00:14:29.385 [2024-12-05 06:39:24.763019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763041] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb900) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb900, cid 5, qid 0 00:14:29.386 [2024-12-05 06:39:24.763147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb900) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763177] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 
06:39:24.763203] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb900, cid 5, qid 0 00:14:29.386 [2024-12-05 06:39:24.763252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb900) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763369] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb900, cid 5, qid 0 00:14:29.386 [2024-12-05 06:39:24.763417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb900) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763525] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa82540) 00:14:29.386 [2024-12-05 06:39:24.763536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.386 [2024-12-05 06:39:24.763556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb900, cid 5, qid 0 00:14:29.386 [2024-12-05 06:39:24.763563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb7a0, cid 4, qid 0 00:14:29.386 [2024-12-05 06:39:24.763568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabba60, cid 6, qid 0 00:14:29.386 [2024-12-05 06:39:24.763573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabbbc0, cid 7, qid 0 00:14:29.386 [2024-12-05 06:39:24.763720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.386 [2024-12-05 06:39:24.763741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.386 [2024-12-05 06:39:24.763746] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763749] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=8192, cccid=5 00:14:29.386 [2024-12-05 06:39:24.763754] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb900) on tqpair(0xa82540): expected_datao=0, payload_size=8192 00:14:29.386 [2024-12-05 06:39:24.763773] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763778] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.386 [2024-12-05 06:39:24.763790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.386 [2024-12-05 06:39:24.763794] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763798] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=512, cccid=4 00:14:29.386 [2024-12-05 06:39:24.763802] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabb7a0) on tqpair(0xa82540): expected_datao=0, payload_size=512 00:14:29.386 [2024-12-05 06:39:24.763809] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763813] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.386 [2024-12-05 06:39:24.763825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.386 [2024-12-05 06:39:24.763828] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763832] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=512, cccid=6 00:14:29.386 [2024-12-05 06:39:24.763836] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabba60) on tqpair(0xa82540): expected_datao=0, payload_size=512 00:14:29.386 [2024-12-05 06:39:24.763843] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763847] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763853] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:29.386 [2024-12-05 06:39:24.763859] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:29.386 [2024-12-05 06:39:24.763862] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763866] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa82540): datao=0, datal=4096, cccid=7 00:14:29.386 [2024-12-05 06:39:24.763870] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xabbbc0) on tqpair(0xa82540): expected_datao=0, payload_size=4096 00:14:29.386 [2024-12-05 06:39:24.763877] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763881] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 [2024-12-05 06:39:24.763900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.386 [2024-12-05 06:39:24.763904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb900) on tqpair=0xa82540 00:14:29.386 [2024-12-05 06:39:24.763920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.386 [2024-12-05 06:39:24.763927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.386 ===================================================== 00:14:29.386 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.386 ===================================================== 00:14:29.386 Controller Capabilities/Features 00:14:29.386 ================================ 00:14:29.386 Vendor ID: 8086 00:14:29.386 Subsystem Vendor ID: 8086 00:14:29.386 Serial Number: SPDK00000000000001 00:14:29.386 Model Number: SPDK bdev Controller 00:14:29.386 Firmware Version: 24.01.1 00:14:29.386 Recommended Arb Burst: 6 00:14:29.386 IEEE OUI Identifier: e4 d2 5c 00:14:29.386 Multi-path I/O 00:14:29.386 May have multiple subsystem ports: Yes 00:14:29.386 May have multiple controllers: Yes 00:14:29.386 Associated with SR-IOV VF: No 00:14:29.386 Max Data Transfer Size: 131072 00:14:29.386 Max Number of Namespaces: 32 00:14:29.386 Max Number of I/O Queues: 127 00:14:29.386 NVMe Specification Version (VS): 1.3 00:14:29.386 NVMe Specification Version (Identify): 1.3 00:14:29.386 Maximum Queue Entries: 128 00:14:29.386 Contiguous Queues Required: Yes 00:14:29.386 Arbitration Mechanisms Supported 00:14:29.386 Weighted Round Robin: Not Supported 00:14:29.386 Vendor Specific: Not Supported 00:14:29.386 Reset Timeout: 15000 ms 00:14:29.386 Doorbell Stride: 4 bytes 00:14:29.386 NVM Subsystem Reset: Not Supported 00:14:29.386 Command Sets Supported 00:14:29.386 NVM Command Set: Supported 00:14:29.386 Boot Partition: Not Supported 00:14:29.386 Memory Page Size Minimum: 4096 bytes 00:14:29.387 Memory Page Size Maximum: 4096 bytes 00:14:29.387 Persistent Memory Region: Not Supported 00:14:29.387 Optional Asynchronous Events Supported 00:14:29.387 Namespace Attribute Notices: Supported 00:14:29.387 Firmware Activation Notices: Not Supported 00:14:29.387 ANA Change Notices: Not Supported 00:14:29.387 PLE Aggregate Log Change Notices: Not Supported 00:14:29.387 LBA Status Info Alert Notices: Not Supported 00:14:29.387 EGE Aggregate Log Change Notices: Not Supported 00:14:29.387 Normal NVM Subsystem Shutdown event: 
Not Supported 00:14:29.387 Zone Descriptor Change Notices: Not Supported 00:14:29.387 Discovery Log Change Notices: Not Supported 00:14:29.387 Controller Attributes 00:14:29.387 128-bit Host Identifier: Supported 00:14:29.387 Non-Operational Permissive Mode: Not Supported 00:14:29.387 NVM Sets: Not Supported 00:14:29.387 Read Recovery Levels: Not Supported 00:14:29.387 Endurance Groups: Not Supported 00:14:29.387 Predictable Latency Mode: Not Supported 00:14:29.387 Traffic Based Keep Alive: Not Supported 00:14:29.387 Namespace Granularity: Not Supported 00:14:29.387 SQ Associations: Not Supported 00:14:29.387 UUID List: Not Supported 00:14:29.387 Multi-Domain Subsystem: Not Supported 00:14:29.387 Fixed Capacity Management: Not Supported 00:14:29.387 Variable Capacity Management: Not Supported 00:14:29.387 Delete Endurance Group: Not Supported 00:14:29.387 Delete NVM Set: Not Supported 00:14:29.387 Extended LBA Formats Supported: Not Supported 00:14:29.387 Flexible Data Placement Supported: Not Supported 00:14:29.387 00:14:29.387 Controller Memory Buffer Support 00:14:29.387 ================================ 00:14:29.387 Supported: No 00:14:29.387 00:14:29.387 Persistent Memory Region Support 00:14:29.387 ================================ 00:14:29.387 Supported: No 00:14:29.387 00:14:29.387 Admin Command Set Attributes 00:14:29.387 ============================ 00:14:29.387 Security Send/Receive: Not Supported 00:14:29.387 Format NVM: Not Supported 00:14:29.387 Firmware Activate/Download: Not Supported 00:14:29.387 Namespace Management: Not Supported 00:14:29.387 Device Self-Test: Not Supported 00:14:29.387 Directives: Not Supported 00:14:29.387 NVMe-MI: Not Supported 00:14:29.387 Virtualization Management: Not Supported 00:14:29.387 Doorbell Buffer Config: Not Supported 00:14:29.387 Get LBA Status Capability: Not Supported 00:14:29.387 Command & Feature Lockdown Capability: Not Supported 00:14:29.387 Abort Command Limit: 4 00:14:29.387 Async Event Request Limit: 4 00:14:29.387 Number of Firmware Slots: N/A 00:14:29.387 Firmware Slot 1 Read-Only: N/A 00:14:29.387 [2024-12-05 06:39:24.763931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.763935] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb7a0) on tqpair=0xa82540 00:14:29.387 [2024-12-05 06:39:24.763945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.387 [2024-12-05 06:39:24.763951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.387 [2024-12-05 06:39:24.763955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.763959] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabba60) on tqpair=0xa82540 00:14:29.387 [2024-12-05 06:39:24.763966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.387 [2024-12-05 06:39:24.763973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.387 [2024-12-05 06:39:24.763976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.763980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabbbc0) on tqpair=0xa82540 00:14:29.387 Firmware Activation Without Reset: N/A 00:14:29.387 Multiple Update Detection Support: N/A 00:14:29.387 Firmware Update Granularity: No Information Provided 00:14:29.387 Per-Namespace SMART Log: No 00:14:29.387 Asymmetric Namespace Access Log Page: Not Supported 00:14:29.387 
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:29.387 Command Effects Log Page: Supported 00:14:29.387 Get Log Page Extended Data: Supported 00:14:29.387 Telemetry Log Pages: Not Supported 00:14:29.387 Persistent Event Log Pages: Not Supported 00:14:29.387 Supported Log Pages Log Page: May Support 00:14:29.387 Commands Supported & Effects Log Page: Not Supported 00:14:29.387 Feature Identifiers & Effects Log Page: May Support 00:14:29.387 NVMe-MI Commands & Effects Log Page: May Support 00:14:29.387 Data Area 4 for Telemetry Log: Not Supported 00:14:29.387 Error Log Page Entries Supported: 128 00:14:29.387 Keep Alive: Supported 00:14:29.387 Keep Alive Granularity: 10000 ms 00:14:29.387 00:14:29.387 NVM Command Set Attributes 00:14:29.387 ========================== 00:14:29.387 Submission Queue Entry Size 00:14:29.387 Max: 64 00:14:29.387 Min: 64 00:14:29.387 Completion Queue Entry Size 00:14:29.387 Max: 16 00:14:29.387 Min: 16 00:14:29.387 Number of Namespaces: 32 00:14:29.387 Compare Command: Supported 00:14:29.387 Write Uncorrectable Command: Not Supported 00:14:29.387 Dataset Management Command: Supported 00:14:29.387 Write Zeroes Command: Supported 00:14:29.387 Set Features Save Field: Not Supported 00:14:29.387 Reservations: Supported 00:14:29.387 Timestamp: Not Supported 00:14:29.387 Copy: Supported 00:14:29.387 Volatile Write Cache: Present 00:14:29.387 Atomic Write Unit (Normal): 1 00:14:29.387 Atomic Write Unit (PFail): 1 00:14:29.387 Atomic Compare & Write Unit: 1 00:14:29.387 Fused Compare & Write: Supported 00:14:29.387 Scatter-Gather List 00:14:29.387 SGL Command Set: Supported 00:14:29.387 SGL Keyed: Supported 00:14:29.387 SGL Bit Bucket Descriptor: Not Supported 00:14:29.387 SGL Metadata Pointer: Not Supported 00:14:29.387 Oversized SGL: Not Supported 00:14:29.387 SGL Metadata Address: Not Supported 00:14:29.387 SGL Offset: Supported 00:14:29.387 Transport SGL Data Block: Not Supported 00:14:29.387 Replay Protected Memory Block: Not Supported 00:14:29.387 00:14:29.387 Firmware Slot Information 00:14:29.387 ========================= 00:14:29.387 Active slot: 1 00:14:29.387 Slot 1 Firmware Revision: 24.01.1 00:14:29.387 00:14:29.387 00:14:29.387 Commands Supported and Effects 00:14:29.387 ============================== 00:14:29.387 Admin Commands 00:14:29.387 -------------- 00:14:29.387 Get Log Page (02h): Supported 00:14:29.387 Identify (06h): Supported 00:14:29.387 Abort (08h): Supported 00:14:29.387 Set Features (09h): Supported 00:14:29.387 Get Features (0Ah): Supported 00:14:29.387 Asynchronous Event Request (0Ch): Supported 00:14:29.387 Keep Alive (18h): Supported 00:14:29.387 I/O Commands 00:14:29.387 ------------ 00:14:29.387 Flush (00h): Supported LBA-Change 00:14:29.387 Write (01h): Supported LBA-Change 00:14:29.387 Read (02h): Supported 00:14:29.387 Compare (05h): Supported 00:14:29.387 Write Zeroes (08h): Supported LBA-Change 00:14:29.387 Dataset Management (09h): Supported LBA-Change 00:14:29.387 Copy (19h): Supported LBA-Change 00:14:29.387 Unknown (79h): Supported LBA-Change 00:14:29.387 Unknown (7Ah): Supported 00:14:29.387 00:14:29.387 Error Log 00:14:29.387 ========= 00:14:29.387 00:14:29.387 Arbitration 00:14:29.387 =========== 00:14:29.387 Arbitration Burst: 1 00:14:29.387 00:14:29.387 Power Management 00:14:29.387 ================ 00:14:29.387 Number of Power States: 1 00:14:29.387 Current Power State: Power State #0 00:14:29.387 Power State #0: 00:14:29.387 Max Power: 0.00 W 00:14:29.387 Non-Operational State: Operational 00:14:29.387 Entry Latency: Not 
Reported 00:14:29.387 Exit Latency: Not Reported 00:14:29.387 Relative Read Throughput: 0 00:14:29.387 Relative Read Latency: 0 00:14:29.387 Relative Write Throughput: 0 00:14:29.387 Relative Write Latency: 0 00:14:29.387 Idle Power: Not Reported 00:14:29.387 Active Power: Not Reported 00:14:29.387 Non-Operational Permissive Mode: Not Supported 00:14:29.387 00:14:29.387 Health Information 00:14:29.387 ================== 00:14:29.387 Critical Warnings: 00:14:29.387 Available Spare Space: OK 00:14:29.387 Temperature: OK 00:14:29.387 Device Reliability: OK 00:14:29.387 Read Only: No 00:14:29.387 Volatile Memory Backup: OK 00:14:29.387 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:29.387 Temperature Threshold: [2024-12-05 06:39:24.764093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.764102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.764106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa82540) 00:14:29.387 [2024-12-05 06:39:24.764114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.387 [2024-12-05 06:39:24.764137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabbbc0, cid 7, qid 0 00:14:29.387 [2024-12-05 06:39:24.764188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.387 [2024-12-05 06:39:24.764195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.387 [2024-12-05 06:39:24.764199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.387 [2024-12-05 06:39:24.764203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabbbc0) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.764237] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:29.388 [2024-12-05 06:39:24.764250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.388 [2024-12-05 06:39:24.764257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.388 [2024-12-05 06:39:24.764263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.388 [2024-12-05 06:39:24.764270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.388 [2024-12-05 06:39:24.764279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.764283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.764287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.764294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.764315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.768374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.768396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 
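Note: the report above is SPDK's controller summary for the NVMe-oF target at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1), fetched over the TCP transport whose DEBUG records are interleaved with it; the "Prepare to destruct SSD" record marks the start of the orderly teardown traced below. For reference, a minimal host-side sketch that connects to the same target and reads the controller data through SPDK's public API could look as follows; the program name and the printed fields are illustrative, not taken from this run.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* Initialize the SPDK environment layer first. */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Target coordinates as printed in the log above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect() failed\n");
            return 1;
        }

        /* Identify data backing fields such as Serial/Model Number above. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", cdata->sn);
        printf("Model Number:  %.40s\n", cdata->mn);
        printf("Firmware:      %.8s\n", cdata->fr);

        spdk_nvme_detach(ctrlr);   /* triggers a shutdown like the one traced below */
        return 0;
    }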
[2024-12-05 06:39:24.768402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.768416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768425] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.768434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.768463] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.768532] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.768539] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.768543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.768553] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:29.388 [2024-12-05 06:39:24.768558] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:29.388 [2024-12-05 06:39:24.768576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.768592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.768610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.768658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.768665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.768669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.768685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768689] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768693] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.768701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.768732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.768795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 
06:39:24.768803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.768807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768811] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.768821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768826] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768830] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.768838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.768854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.768902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.768909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.768913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.768928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.768937] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.768944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.768961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.769009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.769016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.769020] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769024] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.769034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.769050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.769067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.769111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.769118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.769122] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 
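Note: each repeating DEBUG cycle here decodes one PDU from the TCP stream: nvme_tcp_pdu_ch_handle parses the common header and prints its type, then the type-specific handler runs (type 5 is a Capsule Response completing an admin command; type 7 is C2H Data, e.g. the 4096-byte identify payload noted earlier). The type values and the 8-byte common header come from the NVMe/TCP transport specification; the standalone sketch below restates them with illustrative names (SPDK carries its own internal definitions).

    #include <stdint.h>

    /* NVMe/TCP PDU types (NVMe/TCP transport spec); the "pdu type = N"
     * DEBUG lines print these numeric values. Enum names are illustrative. */
    enum nvme_tcp_pdu_type {
        NVME_TCP_PDU_IC_REQ       = 0x00,
        NVME_TCP_PDU_IC_RESP      = 0x01,
        NVME_TCP_PDU_H2C_TERM_REQ = 0x02,
        NVME_TCP_PDU_C2H_TERM_REQ = 0x03,
        NVME_TCP_PDU_CAPSULE_CMD  = 0x04,  /* host to controller command */
        NVME_TCP_PDU_CAPSULE_RESP = 0x05,  /* completion seen in each cycle */
        NVME_TCP_PDU_H2C_DATA     = 0x06,
        NVME_TCP_PDU_C2H_DATA     = 0x07,  /* controller to host data */
        NVME_TCP_PDU_R2T          = 0x08,
    };

    /* 8-byte common header (CH) that starts every PDU; nvme_tcp_pdu_ch_handle
     * is the routine that parses it. */
    struct nvme_tcp_common_pdu_hdr {
        uint8_t  pdu_type;
        uint8_t  flags;
        uint8_t  hlen;   /* length of the PDU header */
        uint8_t  pdo;    /* PDU data offset */
        uint32_t plen;   /* total PDU length, header plus data */
    };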
[2024-12-05 06:39:24.769126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.769137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.769153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.769170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.769228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.769235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.388 [2024-12-05 06:39:24.769239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.388 [2024-12-05 06:39:24.769253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.388 [2024-12-05 06:39:24.769262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.388 [2024-12-05 06:39:24.769269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.388 [2024-12-05 06:39:24.769285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.388 [2024-12-05 06:39:24.769345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.388 [2024-12-05 06:39:24.769352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.769476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.769487] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769496] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769507] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769541] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.769584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.769590] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769609] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.769688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.769695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769719] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.769804] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.769811] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769815] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769819] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769837] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.769910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.769920] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.769925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769929] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.769940] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.769948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.769956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.769973] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770380] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770396] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770497] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 
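Note: the long run of near-identical FABRIC PROPERTY GET qid:0 cid:3 completions is the host polling controller properties while the shutdown announced above proceeds (RTD3E = 0 us, shutdown timeout = 10000 ms): after CC is written for shutdown, the driver re-reads CSTS until the controller reports shutdown complete or the timeout expires. A minimal sketch of driving the same teardown through SPDK's public asynchronous detach API, assuming ctrlr came from spdk_nvme_connect():

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Detach asynchronously and poll to completion; each poll iteration can
     * issue or complete one fabrics Property Get, which is consistent with
     * the per-iteration DEBUG/NOTICE pairs in this log. */
    static int detach_polled(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *ctx = NULL;
        int rc;

        rc = spdk_nvme_detach_async(ctrlr, &ctx);
        if (rc != 0) {
            return rc;
        }
        do {
            rc = spdk_nvme_detach_poll_async(ctx);
        } while (rc == -EAGAIN);

        return rc;
    }

A plain spdk_nvme_detach(ctrlr) performs the same sequence synchronously.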
00:14:29.389 [2024-12-05 06:39:24.770603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.389 [2024-12-05 06:39:24.770611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.389 [2024-12-05 06:39:24.770615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770619] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.389 [2024-12-05 06:39:24.770630] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770635] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.389 [2024-12-05 06:39:24.770639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.389 [2024-12-05 06:39:24.770646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.389 [2024-12-05 06:39:24.770664] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.389 [2024-12-05 06:39:24.770728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.770735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.770739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.770754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.770770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.770801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.770845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.770852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.770856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.770870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.770886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.770902] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.770946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.770952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:14:29.390 [2024-12-05 06:39:24.770956] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770960] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.770971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.770979] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.770986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771002] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771058] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771066] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771081] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771442] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771446] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771529] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771554] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771689] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771693] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771773] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.390 [2024-12-05 06:39:24.771903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.771919] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.390 [2024-12-05 06:39:24.771965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.390 [2024-12-05 06:39:24.771972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.390 [2024-12-05 06:39:24.771976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.390 [2024-12-05 06:39:24.771990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.390 [2024-12-05 06:39:24.771998] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 
00:14:29.390 [2024-12-05 06:39:24.772005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.390 [2024-12-05 06:39:24.772022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 06:39:24.772124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772202] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772206] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 06:39:24.772230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772292] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772306] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 
06:39:24.772338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 06:39:24.772460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772515] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772519] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 06:39:24.772561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:29.391 [2024-12-05 06:39:24.772626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:29.391 [2024-12-05 06:39:24.772629] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540 00:14:29.391 [2024-12-05 06:39:24.772644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:29.391 [2024-12-05 06:39:24.772652] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540) 00:14:29.391 [2024-12-05 06:39:24.772659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:29.391 [2024-12-05 06:39:24.772676] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0 00:14:29.391 [2024-12-05 06:39:24.772723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5
[... the same eight-entry DEBUG/NOTICE sequence repeats about thirty times between 06:39:24.772736 and 06:39:24.776303, one iteration per FABRIC PROPERTY GET poll of the controller shutdown: nvme_tcp_pdu_ch_handle (pdu type = 5) -> nvme_tcp_pdu_psh_handle -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete_safe (complete tcp_req(0xabb640) on tqpair=0xa82540) -> nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send (capsule_cmd cid=3 on tqpair(0xa82540)) -> nvme_admin_qpair_print_command (FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) -> nvme_tcp_qpair_cmd_send_complete (tcp req 0xabb640, cid 3, qid 0); repetitions elided, the final iterations follow ...]
00:14:29.394 [2024-12-05 06:39:24.780398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:29.394 [2024-12-05 06:39:24.780419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:29.394 [2024-12-05 06:39:24.780441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:29.394 [2024-12-05 06:39:24.780446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540
00:14:29.394 [2024-12-05 06:39:24.780460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:29.394 [2024-12-05 06:39:24.780466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:29.394 [2024-12-05 06:39:24.780470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa82540)
00:14:29.394 [2024-12-05 06:39:24.780479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:29.394 [2024-12-05 06:39:24.780504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xabb640, cid 3, qid 0
00:14:29.394 [2024-12-05 06:39:24.780554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:29.394 [2024-12-05 06:39:24.780561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:29.394 [2024-12-05 06:39:24.780565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:29.394 [2024-12-05 06:39:24.780569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xabb640) on tqpair=0xa82540
00:14:29.394 [2024-12-05 06:39:24.780578] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 12 milliseconds
00:14:29.394 Temperature: 0 Kelvin (-273 Celsius)
00:14:29.394 Available Spare: 0%
00:14:29.394 Available Spare Threshold: 0%
00:14:29.394 Life Percentage Used: 0%
00:14:29.394 Data Units Read: 0
00:14:29.394 Data Units Written: 0
00:14:29.394 Host Read Commands: 0
00:14:29.394 Host Write Commands: 0
00:14:29.394 Controller Busy Time: 0 minutes
00:14:29.394 Power Cycles: 0
00:14:29.394 Power On Hours: 0 hours
00:14:29.394 Unsafe Shutdowns: 0
00:14:29.394 Unrecoverable Media Errors: 0
00:14:29.394 Lifetime Error Log Entries: 0
00:14:29.394 Warning Temperature Time: 0 minutes
00:14:29.394 Critical Temperature Time: 0 minutes
00:14:29.394
00:14:29.394 Number of Queues
00:14:29.394 ================
00:14:29.394 Number of I/O Submission Queues: 127
00:14:29.394 Number of I/O Completion Queues: 127
00:14:29.394
00:14:29.394 Active Namespaces
00:14:29.394 =================
00:14:29.394 Namespace ID:1
00:14:29.394 Error Recovery Timeout: Unlimited
00:14:29.394 Command Set Identifier: NVM (00h)
00:14:29.394 Deallocate: Supported
00:14:29.394 Deallocated/Unwritten Error: Not Supported
00:14:29.394 Deallocated Read Value: Unknown
00:14:29.394 Deallocate in Write Zeroes: Not Supported
00:14:29.394 Deallocated Guard Field: 0xFFFF
00:14:29.394 Flush: Supported
00:14:29.394 Reservation: Supported
00:14:29.394 Namespace Sharing Capabilities: Multiple Controllers
00:14:29.394 Size (in LBAs): 131072 (0GiB)
00:14:29.394 Capacity (in LBAs): 131072 (0GiB)
00:14:29.394 Utilization (in LBAs): 131072 (0GiB)
00:14:29.394 NGUID: ABCDEF0123456789ABCDEF0123456789
00:14:29.394 EUI64: ABCDEF0123456789
00:14:29.394 UUID: a0847a6d-f244-4c9f-afca-1e49f94fe0a4
00:14:29.394 Thin Provisioning:
Not Supported 00:14:29.394 Per-NS Atomic Units: Yes 00:14:29.394 Atomic Boundary Size (Normal): 0 00:14:29.394 Atomic Boundary Size (PFail): 0 00:14:29.394 Atomic Boundary Offset: 0 00:14:29.394 Maximum Single Source Range Length: 65535 00:14:29.394 Maximum Copy Length: 65535 00:14:29.394 Maximum Source Range Count: 1 00:14:29.394 NGUID/EUI64 Never Reused: No 00:14:29.394 Namespace Write Protected: No 00:14:29.394 Number of LBA Formats: 1 00:14:29.394 Current LBA Format: LBA Format #00 00:14:29.394 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:29.394 00:14:29.394 06:39:24 -- host/identify.sh@51 -- # sync 00:14:29.394 06:39:24 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.394 06:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.394 06:39:24 -- common/autotest_common.sh@10 -- # set +x 00:14:29.654 06:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.654 06:39:24 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:29.654 06:39:24 -- host/identify.sh@56 -- # nvmftestfini 00:14:29.654 06:39:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:29.654 06:39:24 -- nvmf/common.sh@116 -- # sync 00:14:29.654 06:39:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:29.654 06:39:24 -- nvmf/common.sh@119 -- # set +e 00:14:29.654 06:39:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:29.654 06:39:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:29.654 rmmod nvme_tcp 00:14:29.654 rmmod nvme_fabrics 00:14:29.654 rmmod nvme_keyring 00:14:29.654 06:39:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:29.654 06:39:24 -- nvmf/common.sh@123 -- # set -e 00:14:29.654 06:39:24 -- nvmf/common.sh@124 -- # return 0 00:14:29.654 06:39:24 -- nvmf/common.sh@477 -- # '[' -n 80018 ']' 00:14:29.654 06:39:24 -- nvmf/common.sh@478 -- # killprocess 80018 00:14:29.654 06:39:24 -- common/autotest_common.sh@936 -- # '[' -z 80018 ']' 00:14:29.654 06:39:24 -- common/autotest_common.sh@940 -- # kill -0 80018 00:14:29.654 06:39:24 -- common/autotest_common.sh@941 -- # uname 00:14:29.654 06:39:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:29.654 06:39:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80018 00:14:29.654 killing process with pid 80018 00:14:29.654 06:39:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:29.654 06:39:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:29.654 06:39:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80018' 00:14:29.654 06:39:24 -- common/autotest_common.sh@955 -- # kill 80018 00:14:29.655 [2024-12-05 06:39:24.935023] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:29.655 06:39:24 -- common/autotest_common.sh@960 -- # wait 80018 00:14:29.655 06:39:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:29.655 06:39:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:29.655 06:39:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:29.655 06:39:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.655 06:39:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:29.655 06:39:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.655 06:39:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.655 06:39:25 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:29.914 06:39:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:29.914 00:14:29.914 real 0m2.412s 00:14:29.914 user 0m6.826s 00:14:29.914 sys 0m0.564s 00:14:29.914 06:39:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:29.914 ************************************ 00:14:29.914 END TEST nvmf_identify 00:14:29.914 ************************************ 00:14:29.914 06:39:25 -- common/autotest_common.sh@10 -- # set +x 00:14:29.914 06:39:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:29.914 06:39:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:29.914 06:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.914 06:39:25 -- common/autotest_common.sh@10 -- # set +x 00:14:29.915 ************************************ 00:14:29.915 START TEST nvmf_perf 00:14:29.915 ************************************ 00:14:29.915 06:39:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:29.915 * Looking for test storage... 00:14:29.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:29.915 06:39:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:29.915 06:39:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:29.915 06:39:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:29.915 06:39:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:29.915 06:39:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:29.915 06:39:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:29.915 06:39:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:29.915 06:39:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:29.915 06:39:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:29.915 06:39:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.915 06:39:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:29.915 06:39:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:29.915 06:39:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:29.915 06:39:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:29.915 06:39:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:29.915 06:39:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:29.915 06:39:25 -- scripts/common.sh@344 -- # : 1 00:14:29.915 06:39:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:29.915 06:39:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.915 06:39:25 -- scripts/common.sh@364 -- # decimal 1 00:14:29.915 06:39:25 -- scripts/common.sh@352 -- # local d=1 00:14:29.915 06:39:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.915 06:39:25 -- scripts/common.sh@354 -- # echo 1 00:14:29.915 06:39:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:29.915 06:39:25 -- scripts/common.sh@365 -- # decimal 2 00:14:29.915 06:39:25 -- scripts/common.sh@352 -- # local d=2 00:14:29.915 06:39:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.915 06:39:25 -- scripts/common.sh@354 -- # echo 2 00:14:29.915 06:39:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:29.915 06:39:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:29.915 06:39:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:29.915 06:39:25 -- scripts/common.sh@367 -- # return 0 00:14:29.915 06:39:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.915 06:39:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.915 --rc genhtml_branch_coverage=1 00:14:29.915 --rc genhtml_function_coverage=1 00:14:29.915 --rc genhtml_legend=1 00:14:29.915 --rc geninfo_all_blocks=1 00:14:29.915 --rc geninfo_unexecuted_blocks=1 00:14:29.915 00:14:29.915 ' 00:14:29.915 06:39:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.915 --rc genhtml_branch_coverage=1 00:14:29.915 --rc genhtml_function_coverage=1 00:14:29.915 --rc genhtml_legend=1 00:14:29.915 --rc geninfo_all_blocks=1 00:14:29.915 --rc geninfo_unexecuted_blocks=1 00:14:29.915 00:14:29.915 ' 00:14:29.915 06:39:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.915 --rc genhtml_branch_coverage=1 00:14:29.915 --rc genhtml_function_coverage=1 00:14:29.915 --rc genhtml_legend=1 00:14:29.915 --rc geninfo_all_blocks=1 00:14:29.915 --rc geninfo_unexecuted_blocks=1 00:14:29.915 00:14:29.915 ' 00:14:29.915 06:39:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.915 --rc genhtml_branch_coverage=1 00:14:29.915 --rc genhtml_function_coverage=1 00:14:29.915 --rc genhtml_legend=1 00:14:29.915 --rc geninfo_all_blocks=1 00:14:29.915 --rc geninfo_unexecuted_blocks=1 00:14:29.915 00:14:29.915 ' 00:14:29.915 06:39:25 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.915 06:39:25 -- nvmf/common.sh@7 -- # uname -s 00:14:29.915 06:39:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.915 06:39:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.915 06:39:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.915 06:39:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.915 06:39:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.915 06:39:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.915 06:39:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.915 06:39:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.915 06:39:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.915 06:39:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.915 06:39:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:14:29.915 
06:39:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:14:29.915 06:39:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.915 06:39:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.915 06:39:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.915 06:39:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.915 06:39:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.915 06:39:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.915 06:39:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.915 06:39:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.915 06:39:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.915 06:39:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.174 06:39:25 -- paths/export.sh@5 -- # export PATH 00:14:30.174 06:39:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.174 06:39:25 -- nvmf/common.sh@46 -- # : 0 00:14:30.174 06:39:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:30.174 06:39:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:30.174 06:39:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:30.174 06:39:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.174 06:39:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.174 06:39:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
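For reference: the NVME_HOST array assembled above packages the generated host NQN and host ID as nvme-cli style flags. The perf test below drives I/O with SPDK's own initiator, but the same identity can be handed to a kernel initiator; a minimal sketch, assuming nvme-cli is installed and using the address, port, and subsystem NQN this test configures later (not part of the recorded run):

    # Generate a host NQN once; its trailing UUID doubles as the host ID
    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:910f3027-...
    HOSTID=${HOSTNQN##*uuid:}          # strip everything through "uuid:" to get the bare UUID
    # Connect the way the $NVME_HOST flags above would be consumed
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"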
00:14:30.174 06:39:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:30.174 06:39:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:30.174 06:39:25 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:30.174 06:39:25 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:30.174 06:39:25 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.174 06:39:25 -- host/perf.sh@17 -- # nvmftestinit 00:14:30.174 06:39:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:30.174 06:39:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.174 06:39:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:30.174 06:39:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:30.174 06:39:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:30.174 06:39:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.174 06:39:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.174 06:39:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.174 06:39:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:30.174 06:39:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:30.174 06:39:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:30.174 06:39:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:30.174 06:39:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:30.174 06:39:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:30.174 06:39:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.174 06:39:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.174 06:39:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.174 06:39:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:30.174 06:39:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.174 06:39:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.174 06:39:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.174 06:39:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.174 06:39:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.174 06:39:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.174 06:39:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.174 06:39:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.174 06:39:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:30.174 06:39:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:30.174 Cannot find device "nvmf_tgt_br" 00:14:30.174 06:39:25 -- nvmf/common.sh@154 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.174 Cannot find device "nvmf_tgt_br2" 00:14:30.174 06:39:25 -- nvmf/common.sh@155 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:30.174 06:39:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:30.174 Cannot find device "nvmf_tgt_br" 00:14:30.174 06:39:25 -- nvmf/common.sh@157 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:30.174 Cannot find device "nvmf_tgt_br2" 00:14:30.174 06:39:25 -- nvmf/common.sh@158 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:30.174 06:39:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:30.174 06:39:25 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.174 06:39:25 -- nvmf/common.sh@161 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.174 06:39:25 -- nvmf/common.sh@162 -- # true 00:14:30.174 06:39:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.174 06:39:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.174 06:39:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.174 06:39:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.174 06:39:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.174 06:39:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.174 06:39:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.174 06:39:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.174 06:39:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.174 06:39:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:30.174 06:39:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:30.174 06:39:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:30.174 06:39:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:30.174 06:39:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.432 06:39:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.433 06:39:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.433 06:39:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:30.433 06:39:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:30.433 06:39:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.433 06:39:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.433 06:39:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.433 06:39:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.433 06:39:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.433 06:39:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:30.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:30.433 00:14:30.433 --- 10.0.0.2 ping statistics --- 00:14:30.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.433 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:30.433 06:39:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:30.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:30.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:30.433 00:14:30.433 --- 10.0.0.3 ping statistics --- 00:14:30.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.433 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:30.433 06:39:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:30.433 00:14:30.433 --- 10.0.0.1 ping statistics --- 00:14:30.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.433 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:30.433 06:39:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.433 06:39:25 -- nvmf/common.sh@421 -- # return 0 00:14:30.433 06:39:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.433 06:39:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.433 06:39:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.433 06:39:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.433 06:39:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.433 06:39:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.433 06:39:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.433 06:39:25 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:30.433 06:39:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.433 06:39:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.433 06:39:25 -- common/autotest_common.sh@10 -- # set +x 00:14:30.433 06:39:25 -- nvmf/common.sh@469 -- # nvmfpid=80226 00:14:30.433 06:39:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.433 06:39:25 -- nvmf/common.sh@470 -- # waitforlisten 80226 00:14:30.433 06:39:25 -- common/autotest_common.sh@829 -- # '[' -z 80226 ']' 00:14:30.433 06:39:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.433 06:39:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.433 06:39:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.433 06:39:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.433 06:39:25 -- common/autotest_common.sh@10 -- # set +x 00:14:30.433 [2024-12-05 06:39:25.794857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:30.433 [2024-12-05 06:39:25.794944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.691 [2024-12-05 06:39:25.934245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.691 [2024-12-05 06:39:25.967183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.691 [2024-12-05 06:39:25.967394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.691 [2024-12-05 06:39:25.967410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
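For reference: the three pings above verify the veth/bridge/namespace topology that nvmf_veth_init built before the target was started. A condensed sketch of the same iproute2 plumbing, using the interface names and addresses from this run (second target interface, cleanup, and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the veth peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target, as checked above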
00:14:30.691 [2024-12-05 06:39:25.967419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.691 [2024-12-05 06:39:25.967528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.691 [2024-12-05 06:39:25.968092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.691 [2024-12-05 06:39:25.968367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.691 [2024-12-05 06:39:25.968373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.691 06:39:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.691 06:39:26 -- common/autotest_common.sh@862 -- # return 0 00:14:30.691 06:39:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.691 06:39:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.691 06:39:26 -- common/autotest_common.sh@10 -- # set +x 00:14:30.691 06:39:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.691 06:39:26 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:30.691 06:39:26 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:31.257 06:39:26 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:31.257 06:39:26 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:31.516 06:39:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:31.516 06:39:26 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.775 06:39:27 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:31.775 06:39:27 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:31.775 06:39:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:31.775 06:39:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:31.775 06:39:27 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:32.033 [2024-12-05 06:39:27.378260] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.034 06:39:27 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.292 06:39:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:32.292 06:39:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.552 06:39:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:32.552 06:39:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:32.811 06:39:28 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.070 [2024-12-05 06:39:28.415877] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.070 06:39:28 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.330 06:39:28 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:33.330 06:39:28 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:33.330 06:39:28 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:33.330 06:39:28 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:34.725 Initializing NVMe Controllers 00:14:34.725 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:34.725 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:34.725 Initialization complete. Launching workers. 00:14:34.725 ======================================================== 00:14:34.725 Latency(us) 00:14:34.725 Device Information : IOPS MiB/s Average min max 00:14:34.725 PCIE (0000:00:06.0) NSID 1 from core 0: 23493.62 91.77 1362.34 299.78 8034.06 00:14:34.725 ======================================================== 00:14:34.725 Total : 23493.62 91.77 1362.34 299.78 8034.06 00:14:34.725 00:14:34.725 06:39:29 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:36.102 Initializing NVMe Controllers 00:14:36.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:36.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:36.102 Initialization complete. Launching workers. 00:14:36.102 ======================================================== 00:14:36.102 Latency(us) 00:14:36.102 Device Information : IOPS MiB/s Average min max 00:14:36.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3473.76 13.57 287.60 102.76 6222.32 00:14:36.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.99 0.49 8063.56 5887.80 11994.00 00:14:36.102 ======================================================== 00:14:36.102 Total : 3598.75 14.06 557.68 102.76 11994.00 00:14:36.102 00:14:36.102 06:39:31 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:37.476 Initializing NVMe Controllers 00:14:37.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:37.476 Initialization complete. Launching workers. 00:14:37.476 ======================================================== 00:14:37.476 Latency(us) 00:14:37.476 Device Information : IOPS MiB/s Average min max 00:14:37.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8737.66 34.13 3664.24 435.81 10266.13 00:14:37.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.14 15.49 8092.15 4645.74 16472.86 00:14:37.476 ======================================================== 00:14:37.476 Total : 12703.81 49.62 5046.64 435.81 16472.86 00:14:37.476 00:14:37.476 06:39:32 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:37.476 06:39:32 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:40.019 Initializing NVMe Controllers 00:14:40.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.019 Controller IO queue size 128, less than required. 
00:14:40.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.019 Controller IO queue size 128, less than required. 00:14:40.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:40.019 Initialization complete. Launching workers. 00:14:40.019 ======================================================== 00:14:40.019 Latency(us) 00:14:40.019 Device Information : IOPS MiB/s Average min max 00:14:40.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1891.16 472.79 69008.08 39522.19 125147.13 00:14:40.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.65 165.91 205237.14 115895.44 325251.88 00:14:40.019 ======================================================== 00:14:40.019 Total : 2554.81 638.70 104395.71 39522.19 325251.88 00:14:40.019 00:14:40.019 06:39:35 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:40.019 No valid NVMe controllers or AIO or URING devices found 00:14:40.019 Initializing NVMe Controllers 00:14:40.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.019 Controller IO queue size 128, less than required. 00:14:40.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.019 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:40.019 Controller IO queue size 128, less than required. 00:14:40.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.019 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:40.019 WARNING: Some requested NVMe devices were skipped 00:14:40.019 06:39:35 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:42.552 Initializing NVMe Controllers 00:14:42.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.552 Controller IO queue size 128, less than required. 00:14:42.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.552 Controller IO queue size 128, less than required. 00:14:42.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:42.552 Initialization complete. Launching workers. 
00:14:42.552 00:14:42.552 ==================== 00:14:42.552 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:42.552 TCP transport: 00:14:42.552 polls: 8501 00:14:42.552 idle_polls: 0 00:14:42.552 sock_completions: 8501 00:14:42.553 nvme_completions: 6861 00:14:42.553 submitted_requests: 10491 00:14:42.553 queued_requests: 1 00:14:42.553 00:14:42.553 ==================== 00:14:42.553 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:42.553 TCP transport: 00:14:42.553 polls: 8566 00:14:42.553 idle_polls: 0 00:14:42.553 sock_completions: 8566 00:14:42.553 nvme_completions: 6538 00:14:42.553 submitted_requests: 9974 00:14:42.553 queued_requests: 1 00:14:42.553 ======================================================== 00:14:42.553 Latency(us) 00:14:42.553 Device Information : IOPS MiB/s Average min max 00:14:42.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1775.30 443.83 73634.38 36626.49 130604.75 00:14:42.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1694.47 423.62 76370.26 36724.72 135795.42 00:14:42.553 ======================================================== 00:14:42.553 Total : 3469.77 867.44 74970.45 36626.49 135795.42 00:14:42.553 00:14:42.553 06:39:37 -- host/perf.sh@66 -- # sync 00:14:42.553 06:39:37 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.811 06:39:38 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:42.811 06:39:38 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:42.811 06:39:38 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:43.069 06:39:38 -- host/perf.sh@72 -- # ls_guid=bdf0ba33-5883-4287-a75f-b42d9170502a 00:14:43.069 06:39:38 -- host/perf.sh@73 -- # get_lvs_free_mb bdf0ba33-5883-4287-a75f-b42d9170502a 00:14:43.069 06:39:38 -- common/autotest_common.sh@1353 -- # local lvs_uuid=bdf0ba33-5883-4287-a75f-b42d9170502a 00:14:43.069 06:39:38 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:43.069 06:39:38 -- common/autotest_common.sh@1355 -- # local fc 00:14:43.069 06:39:38 -- common/autotest_common.sh@1356 -- # local cs 00:14:43.069 06:39:38 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:43.327 06:39:38 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:43.327 { 00:14:43.327 "uuid": "bdf0ba33-5883-4287-a75f-b42d9170502a", 00:14:43.327 "name": "lvs_0", 00:14:43.327 "base_bdev": "Nvme0n1", 00:14:43.327 "total_data_clusters": 1278, 00:14:43.327 "free_clusters": 1278, 00:14:43.327 "block_size": 4096, 00:14:43.327 "cluster_size": 4194304 00:14:43.327 } 00:14:43.327 ]' 00:14:43.327 06:39:38 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="bdf0ba33-5883-4287-a75f-b42d9170502a") .free_clusters' 00:14:43.327 06:39:38 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:43.327 06:39:38 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="bdf0ba33-5883-4287-a75f-b42d9170502a") .cluster_size' 00:14:43.327 5112 00:14:43.327 06:39:38 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:43.327 06:39:38 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:43.327 06:39:38 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:43.327 06:39:38 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:43.327 06:39:38 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
bdf0ba33-5883-4287-a75f-b42d9170502a lbd_0 5112 00:14:43.586 06:39:38 -- host/perf.sh@80 -- # lb_guid=06024c7b-2569-4ea3-854a-ed3aa8ffa9a2 00:14:43.586 06:39:38 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 06024c7b-2569-4ea3-854a-ed3aa8ffa9a2 lvs_n_0 00:14:43.845 06:39:39 -- host/perf.sh@83 -- # ls_nested_guid=e4024e08-0607-42ba-ad99-2831d49f16e5 00:14:43.845 06:39:39 -- host/perf.sh@84 -- # get_lvs_free_mb e4024e08-0607-42ba-ad99-2831d49f16e5 00:14:43.845 06:39:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e4024e08-0607-42ba-ad99-2831d49f16e5 00:14:43.845 06:39:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:43.845 06:39:39 -- common/autotest_common.sh@1355 -- # local fc 00:14:43.845 06:39:39 -- common/autotest_common.sh@1356 -- # local cs 00:14:43.845 06:39:39 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:44.413 06:39:39 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:44.413 { 00:14:44.413 "uuid": "bdf0ba33-5883-4287-a75f-b42d9170502a", 00:14:44.413 "name": "lvs_0", 00:14:44.413 "base_bdev": "Nvme0n1", 00:14:44.413 "total_data_clusters": 1278, 00:14:44.413 "free_clusters": 0, 00:14:44.414 "block_size": 4096, 00:14:44.414 "cluster_size": 4194304 00:14:44.414 }, 00:14:44.414 { 00:14:44.414 "uuid": "e4024e08-0607-42ba-ad99-2831d49f16e5", 00:14:44.414 "name": "lvs_n_0", 00:14:44.414 "base_bdev": "06024c7b-2569-4ea3-854a-ed3aa8ffa9a2", 00:14:44.414 "total_data_clusters": 1276, 00:14:44.414 "free_clusters": 1276, 00:14:44.414 "block_size": 4096, 00:14:44.414 "cluster_size": 4194304 00:14:44.414 } 00:14:44.414 ]' 00:14:44.414 06:39:39 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e4024e08-0607-42ba-ad99-2831d49f16e5") .free_clusters' 00:14:44.414 06:39:39 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:44.414 06:39:39 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e4024e08-0607-42ba-ad99-2831d49f16e5") .cluster_size' 00:14:44.414 5104 00:14:44.414 06:39:39 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:44.414 06:39:39 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:44.414 06:39:39 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:44.414 06:39:39 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:44.414 06:39:39 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e4024e08-0607-42ba-ad99-2831d49f16e5 lbd_nest_0 5104 00:14:44.672 06:39:40 -- host/perf.sh@88 -- # lb_nested_guid=a30718bd-12c5-4295-9976-242aa59100e4 00:14:44.672 06:39:40 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.932 06:39:40 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:44.932 06:39:40 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a30718bd-12c5-4295-9976-242aa59100e4 00:14:45.192 06:39:40 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.450 06:39:40 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:45.450 06:39:40 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:45.450 06:39:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:45.450 06:39:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.450 06:39:40 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.709 No valid NVMe controllers or AIO or URING devices found 00:14:45.709 Initializing NVMe Controllers 00:14:45.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.709 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:45.709 WARNING: Some requested NVMe devices were skipped 00:14:45.709 06:39:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:45.709 06:39:41 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:57.971 Initializing NVMe Controllers 00:14:57.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:57.971 Initialization complete. Launching workers. 00:14:57.971 ======================================================== 00:14:57.971 Latency(us) 00:14:57.971 Device Information : IOPS MiB/s Average min max 00:14:57.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 948.31 118.54 1053.68 319.03 8322.08 00:14:57.971 ======================================================== 00:14:57.971 Total : 948.31 118.54 1053.68 319.03 8322.08 00:14:57.971 00:14:57.971 06:39:51 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:57.971 06:39:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:57.971 06:39:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:57.971 No valid NVMe controllers or AIO or URING devices found 00:14:57.971 Initializing NVMe Controllers 00:14:57.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.971 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:57.971 WARNING: Some requested NVMe devices were skipped 00:14:57.971 06:39:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:57.971 06:39:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:07.949 Initializing NVMe Controllers 00:15:07.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:07.949 Initialization complete. Launching workers. 
00:15:07.949 ======================================================== 00:15:07.949 Latency(us) 00:15:07.949 Device Information : IOPS MiB/s Average min max 00:15:07.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1300.92 162.61 24604.74 5998.45 71777.27 00:15:07.949 ======================================================== 00:15:07.949 Total : 1300.92 162.61 24604.74 5998.45 71777.27 00:15:07.949 00:15:07.949 06:40:01 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:07.949 06:40:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:07.950 06:40:01 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:07.950 No valid NVMe controllers or AIO or URING devices found 00:15:07.950 Initializing NVMe Controllers 00:15:07.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.950 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:07.950 WARNING: Some requested NVMe devices were skipped 00:15:07.950 06:40:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:07.950 06:40:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:17.980 Initializing NVMe Controllers 00:15:17.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.980 Controller IO queue size 128, less than required. 00:15:17.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:17.980 Initialization complete. Launching workers. 
00:15:17.980 ======================================================== 00:15:17.980 Latency(us) 00:15:17.980 Device Information : IOPS MiB/s Average min max 00:15:17.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4012.68 501.59 31909.92 11926.80 58352.47 00:15:17.980 ======================================================== 00:15:17.980 Total : 4012.68 501.59 31909.92 11926.80 58352.47 00:15:17.980 00:15:17.980 06:40:12 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.980 06:40:12 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a30718bd-12c5-4295-9976-242aa59100e4 00:15:17.980 06:40:13 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:18.239 06:40:13 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 06024c7b-2569-4ea3-854a-ed3aa8ffa9a2 00:15:18.239 06:40:13 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:18.498 06:40:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:18.498 06:40:13 -- host/perf.sh@114 -- # nvmftestfini 00:15:18.498 06:40:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.498 06:40:13 -- nvmf/common.sh@116 -- # sync 00:15:18.498 06:40:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:18.498 06:40:13 -- nvmf/common.sh@119 -- # set +e 00:15:18.498 06:40:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.498 06:40:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:18.498 rmmod nvme_tcp 00:15:18.498 rmmod nvme_fabrics 00:15:18.758 rmmod nvme_keyring 00:15:18.758 06:40:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.758 06:40:13 -- nvmf/common.sh@123 -- # set -e 00:15:18.758 06:40:13 -- nvmf/common.sh@124 -- # return 0 00:15:18.758 06:40:13 -- nvmf/common.sh@477 -- # '[' -n 80226 ']' 00:15:18.758 06:40:13 -- nvmf/common.sh@478 -- # killprocess 80226 00:15:18.758 06:40:13 -- common/autotest_common.sh@936 -- # '[' -z 80226 ']' 00:15:18.758 06:40:13 -- common/autotest_common.sh@940 -- # kill -0 80226 00:15:18.758 06:40:13 -- common/autotest_common.sh@941 -- # uname 00:15:18.758 06:40:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.758 06:40:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80226 00:15:18.758 killing process with pid 80226 00:15:18.758 06:40:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.758 06:40:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.758 06:40:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80226' 00:15:18.758 06:40:14 -- common/autotest_common.sh@955 -- # kill 80226 00:15:18.758 06:40:14 -- common/autotest_common.sh@960 -- # wait 80226 00:15:20.137 06:40:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.137 06:40:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.137 06:40:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.137 06:40:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.137 06:40:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.137 06:40:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.137 06:40:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.137 06:40:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.137 06:40:15 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:20.137 ************************************ 00:15:20.137 END TEST nvmf_perf 00:15:20.137 ************************************ 00:15:20.137 00:15:20.137 real 0m50.356s 00:15:20.137 user 3m9.680s 00:15:20.137 sys 0m12.672s 00:15:20.137 06:40:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:20.137 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:15:20.137 06:40:15 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.137 06:40:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:20.137 06:40:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.137 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:15:20.137 ************************************ 00:15:20.137 START TEST nvmf_fio_host 00:15:20.137 ************************************ 00:15:20.137 06:40:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.397 * Looking for test storage... 00:15:20.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:20.397 06:40:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:20.397 06:40:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:20.397 06:40:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.397 06:40:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.397 06:40:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.397 06:40:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.397 06:40:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.397 06:40:15 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.397 06:40:15 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.397 06:40:15 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.397 06:40:15 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.397 06:40:15 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.397 06:40:15 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.397 06:40:15 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.397 06:40:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.397 06:40:15 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.397 06:40:15 -- scripts/common.sh@344 -- # : 1 00:15:20.397 06:40:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.397 06:40:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.397 06:40:15 -- scripts/common.sh@364 -- # decimal 1 00:15:20.397 06:40:15 -- scripts/common.sh@352 -- # local d=1 00:15:20.397 06:40:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.397 06:40:15 -- scripts/common.sh@354 -- # echo 1 00:15:20.397 06:40:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.397 06:40:15 -- scripts/common.sh@365 -- # decimal 2 00:15:20.397 06:40:15 -- scripts/common.sh@352 -- # local d=2 00:15:20.397 06:40:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.397 06:40:15 -- scripts/common.sh@354 -- # echo 2 00:15:20.397 06:40:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.397 06:40:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.397 06:40:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.397 06:40:15 -- scripts/common.sh@367 -- # return 0 00:15:20.397 06:40:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.397 06:40:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.397 --rc genhtml_branch_coverage=1 00:15:20.397 --rc genhtml_function_coverage=1 00:15:20.397 --rc genhtml_legend=1 00:15:20.397 --rc geninfo_all_blocks=1 00:15:20.397 --rc geninfo_unexecuted_blocks=1 00:15:20.397 00:15:20.397 ' 00:15:20.397 06:40:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.397 --rc genhtml_branch_coverage=1 00:15:20.397 --rc genhtml_function_coverage=1 00:15:20.397 --rc genhtml_legend=1 00:15:20.397 --rc geninfo_all_blocks=1 00:15:20.397 --rc geninfo_unexecuted_blocks=1 00:15:20.397 00:15:20.397 ' 00:15:20.397 06:40:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.397 --rc genhtml_branch_coverage=1 00:15:20.397 --rc genhtml_function_coverage=1 00:15:20.397 --rc genhtml_legend=1 00:15:20.397 --rc geninfo_all_blocks=1 00:15:20.397 --rc geninfo_unexecuted_blocks=1 00:15:20.397 00:15:20.397 ' 00:15:20.397 06:40:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.397 --rc genhtml_branch_coverage=1 00:15:20.397 --rc genhtml_function_coverage=1 00:15:20.397 --rc genhtml_legend=1 00:15:20.397 --rc geninfo_all_blocks=1 00:15:20.397 --rc geninfo_unexecuted_blocks=1 00:15:20.397 00:15:20.397 ' 00:15:20.397 06:40:15 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.397 06:40:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.397 06:40:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.397 06:40:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.397 06:40:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.397 06:40:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.397 06:40:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.397 06:40:15 -- paths/export.sh@5 -- # export PATH 00:15:20.397 06:40:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.397 06:40:15 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.397 06:40:15 -- nvmf/common.sh@7 -- # uname -s 00:15:20.397 06:40:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.397 06:40:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.397 06:40:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.397 06:40:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.397 06:40:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.397 06:40:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.398 06:40:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.398 06:40:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.398 06:40:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.398 06:40:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.398 06:40:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:15:20.398 06:40:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:15:20.398 06:40:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.398 06:40:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.398 06:40:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.398 06:40:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.398 06:40:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.398 06:40:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.398 06:40:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.398 06:40:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.398 06:40:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.398 06:40:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.398 06:40:15 -- paths/export.sh@5 -- # export PATH 00:15:20.398 06:40:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.398 06:40:15 -- nvmf/common.sh@46 -- # : 0 00:15:20.398 06:40:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.398 06:40:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.398 06:40:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.398 06:40:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.398 06:40:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.398 06:40:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:20.398 06:40:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.398 06:40:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.398 06:40:15 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.398 06:40:15 -- host/fio.sh@14 -- # nvmftestinit 00:15:20.398 06:40:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.398 06:40:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.398 06:40:15 -- nvmf/common.sh@436 -- # prepare_net_devs 
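prepare_net_devs / nvmf_veth_init, traced next, rebuilds the test network from scratch: one veth pair for the initiator side and two for the target namespace, all bridged together on the host. Condensed to the essential commands (each one appears verbatim in the trace below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target,    10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge all host-side peers
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(the individual links are also brought up, including lo inside the namespace, and the trace finishes with three pings verifying that 10.0.0.2, 10.0.0.3, and 10.0.0.1 are reachable).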
00:15:20.398 06:40:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.398 06:40:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.398 06:40:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.398 06:40:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.398 06:40:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.398 06:40:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.398 06:40:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.398 06:40:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.398 06:40:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.398 06:40:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.398 06:40:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.398 06:40:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.398 06:40:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.398 06:40:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.398 06:40:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.398 06:40:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.398 06:40:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.398 06:40:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.398 06:40:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.398 06:40:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.398 06:40:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.398 06:40:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.398 06:40:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.398 06:40:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.398 06:40:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.398 Cannot find device "nvmf_tgt_br" 00:15:20.398 06:40:15 -- nvmf/common.sh@154 -- # true 00:15:20.398 06:40:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.398 Cannot find device "nvmf_tgt_br2" 00:15:20.398 06:40:15 -- nvmf/common.sh@155 -- # true 00:15:20.398 06:40:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.398 06:40:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.398 Cannot find device "nvmf_tgt_br" 00:15:20.398 06:40:15 -- nvmf/common.sh@157 -- # true 00:15:20.398 06:40:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.657 Cannot find device "nvmf_tgt_br2" 00:15:20.657 06:40:15 -- nvmf/common.sh@158 -- # true 00:15:20.657 06:40:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.657 06:40:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.657 06:40:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.657 06:40:15 -- nvmf/common.sh@161 -- # true 00:15:20.657 06:40:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.657 06:40:15 -- nvmf/common.sh@162 -- # true 00:15:20.657 06:40:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.657 06:40:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.657 06:40:15 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.657 06:40:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.657 06:40:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.657 06:40:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.657 06:40:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.657 06:40:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.657 06:40:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.657 06:40:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.657 06:40:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.657 06:40:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.657 06:40:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.657 06:40:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.657 06:40:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.657 06:40:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.657 06:40:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.657 06:40:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.657 06:40:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.657 06:40:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.657 06:40:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.657 06:40:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.657 06:40:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.657 06:40:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:20.657 00:15:20.657 --- 10.0.0.2 ping statistics --- 00:15:20.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.657 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:20.657 06:40:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:20.657 00:15:20.657 --- 10.0.0.3 ping statistics --- 00:15:20.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.657 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:20.657 06:40:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:20.657 00:15:20.657 --- 10.0.0.1 ping statistics --- 00:15:20.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.657 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:20.657 06:40:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.657 06:40:16 -- nvmf/common.sh@421 -- # return 0 00:15:20.657 06:40:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.657 06:40:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.657 06:40:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.657 06:40:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.657 06:40:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.657 06:40:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.657 06:40:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.916 06:40:16 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:20.916 06:40:16 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:20.916 06:40:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.916 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.916 06:40:16 -- host/fio.sh@24 -- # nvmfpid=81050 00:15:20.916 06:40:16 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.916 06:40:16 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.916 06:40:16 -- host/fio.sh@28 -- # waitforlisten 81050 00:15:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.916 06:40:16 -- common/autotest_common.sh@829 -- # '[' -z 81050 ']' 00:15:20.916 06:40:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.916 06:40:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.916 06:40:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.916 06:40:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.916 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.916 [2024-12-05 06:40:16.185779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:20.916 [2024-12-05 06:40:16.185881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.916 [2024-12-05 06:40:16.326839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.916 [2024-12-05 06:40:16.367865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.916 [2024-12-05 06:40:16.368298] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.916 [2024-12-05 06:40:16.368506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.916 [2024-12-05 06:40:16.368682] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
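With the target listening on 10.0.0.2:4420, fio.sh drives I/O through SPDK's fio plugin rather than the kernel initiator: fio is run with the spdk_nvme engine LD_PRELOADed, and the whole NVMe-oF transport ID is packed into the --filename argument. A sketch of the invocation traced below (paths and arguments are exactly the ones this log uses):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The surrounding ldd/grep lines are only checking whether the plugin links a sanitizer runtime (libasan or libclang_rt.asan), so that the matching library can be prepended to LD_PRELOAD first.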
00:15:20.916 [2024-12-05 06:40:16.368979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.916 [2024-12-05 06:40:16.369113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.916 [2024-12-05 06:40:16.369184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.916 [2024-12-05 06:40:16.369181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.853 06:40:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.853 06:40:17 -- common/autotest_common.sh@862 -- # return 0 00:15:21.853 06:40:17 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.112 [2024-12-05 06:40:17.440541] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.112 06:40:17 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:22.112 06:40:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.112 06:40:17 -- common/autotest_common.sh@10 -- # set +x 00:15:22.112 06:40:17 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.370 Malloc1 00:15:22.370 06:40:17 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.629 06:40:18 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.890 06:40:18 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.150 [2024-12-05 06:40:18.505964] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.150 06:40:18 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.409 06:40:18 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:23.409 06:40:18 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.409 06:40:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.409 06:40:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:23.409 06:40:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.409 06:40:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:23.409 06:40:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.409 06:40:18 -- common/autotest_common.sh@1330 -- # shift 00:15:23.409 06:40:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:23.409 06:40:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.409 06:40:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.409 06:40:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:23.409 06:40:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:23.409 06:40:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:23.410 06:40:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:23.410 06:40:18 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.410 06:40:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.410 06:40:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:23.410 06:40:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:23.410 06:40:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:23.410 06:40:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:23.410 06:40:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:23.410 06:40:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:23.669 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:23.669 fio-3.35 00:15:23.669 Starting 1 thread 00:15:26.205 00:15:26.205 test: (groupid=0, jobs=1): err= 0: pid=81129: Thu Dec 5 06:40:21 2024 00:15:26.205 read: IOPS=9525, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec) 00:15:26.205 slat (nsec): min=1806, max=384158, avg=2434.98, stdev=3484.28 00:15:26.205 clat (usec): min=2671, max=12708, avg=6992.05, stdev=513.14 00:15:26.205 lat (usec): min=2705, max=12743, avg=6994.48, stdev=512.98 00:15:26.205 clat percentiles (usec): 00:15:26.205 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:15:26.205 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:15:26.205 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:15:26.205 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[10814], 99.95th=[11863], 00:15:26.205 | 99.99th=[12649] 00:15:26.205 bw ( KiB/s): min=36934, max=38896, per=99.91%, avg=38067.50, stdev=826.34, samples=4 00:15:26.205 iops : min= 9233, max= 9724, avg=9516.75, stdev=206.81, samples=4 00:15:26.205 write: IOPS=9534, BW=37.2MiB/s (39.1MB/s)(74.7MiB/2006msec); 0 zone resets 00:15:26.205 slat (nsec): min=1936, max=265385, avg=2512.31, stdev=2469.68 00:15:26.205 clat (usec): min=2525, max=12547, avg=6391.25, stdev=472.66 00:15:26.205 lat (usec): min=2539, max=12550, avg=6393.76, stdev=472.60 00:15:26.205 clat percentiles (usec): 00:15:26.205 | 1.00th=[ 5407], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:15:26.205 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:15:26.205 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:15:26.205 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[10290], 99.95th=[11076], 00:15:26.205 | 99.99th=[11863] 00:15:26.205 bw ( KiB/s): min=37756, max=38528, per=99.93%, avg=38111.00, stdev=333.97, samples=4 00:15:26.205 iops : min= 9439, max= 9632, avg=9527.75, stdev=83.49, samples=4 00:15:26.205 lat (msec) : 4=0.08%, 10=99.79%, 20=0.13% 00:15:26.205 cpu : usr=68.53%, sys=23.94%, ctx=6, majf=0, minf=5 00:15:26.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:26.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.205 issued rwts: total=19108,19127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.205 00:15:26.205 Run status group 0 (all jobs): 00:15:26.205 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.3MB), 
run=2006-2006msec 00:15:26.205 WRITE: bw=37.2MiB/s (39.1MB/s), 37.2MiB/s-37.2MiB/s (39.1MB/s-39.1MB/s), io=74.7MiB (78.3MB), run=2006-2006msec 00:15:26.205 06:40:21 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.205 06:40:21 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.205 06:40:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:26.205 06:40:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.205 06:40:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:26.205 06:40:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.205 06:40:21 -- common/autotest_common.sh@1330 -- # shift 00:15:26.205 06:40:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:26.205 06:40:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:26.205 06:40:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:26.205 06:40:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:26.205 06:40:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:26.205 06:40:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:26.205 06:40:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:26.205 06:40:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.205 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:26.205 fio-3.35 00:15:26.205 Starting 1 thread 00:15:28.734 00:15:28.734 test: (groupid=0, jobs=1): err= 0: pid=81178: Thu Dec 5 06:40:23 2024 00:15:28.734 read: IOPS=8560, BW=134MiB/s (140MB/s)(269MiB/2011msec) 00:15:28.734 slat (usec): min=2, max=140, avg= 3.75, stdev= 2.45 00:15:28.734 clat (usec): min=2057, max=17808, avg=8124.47, stdev=2757.04 00:15:28.734 lat (usec): min=2060, max=17811, avg=8128.21, stdev=2757.24 00:15:28.734 clat percentiles (usec): 00:15:28.734 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5669], 00:15:28.734 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[ 8356], 00:15:28.734 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12125], 95.00th=[13566], 00:15:28.734 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17171], 99.95th=[17433], 00:15:28.734 | 99.99th=[17695] 00:15:28.734 bw ( KiB/s): min=65376, max=75456, per=51.15%, avg=70056.00, stdev=4588.06, samples=4 00:15:28.734 iops : 
min= 4086, max= 4716, avg=4378.50, stdev=286.75, samples=4 00:15:28.734 write: IOPS=4957, BW=77.5MiB/s (81.2MB/s)(143MiB/1842msec); 0 zone resets 00:15:28.734 slat (usec): min=32, max=4083, avg=39.03, stdev=43.11 00:15:28.734 clat (usec): min=6554, max=29762, avg=11816.79, stdev=2456.13 00:15:28.734 lat (usec): min=6588, max=29808, avg=11855.82, stdev=2460.68 00:15:28.734 clat percentiles (usec): 00:15:28.734 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:15:28.734 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:15:28.734 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14615], 95.00th=[15795], 00:15:28.734 | 99.00th=[22414], 99.50th=[24249], 99.90th=[26870], 99.95th=[29492], 00:15:28.734 | 99.99th=[29754] 00:15:28.734 bw ( KiB/s): min=69088, max=79072, per=92.10%, avg=73048.00, stdev=4660.62, samples=4 00:15:28.734 iops : min= 4318, max= 4942, avg=4565.50, stdev=291.29, samples=4 00:15:28.734 lat (msec) : 4=0.78%, 10=56.47%, 20=42.31%, 50=0.44% 00:15:28.734 cpu : usr=80.70%, sys=13.28%, ctx=11, majf=0, minf=1 00:15:28.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:28.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:28.734 issued rwts: total=17216,9131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:28.734 00:15:28.734 Run status group 0 (all jobs): 00:15:28.734 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=269MiB (282MB), run=2011-2011msec 00:15:28.734 WRITE: bw=77.5MiB/s (81.2MB/s), 77.5MiB/s-77.5MiB/s (81.2MB/s-81.2MB/s), io=143MiB (150MB), run=1842-1842msec 00:15:28.734 06:40:23 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.734 06:40:24 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:28.734 06:40:24 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:28.734 06:40:24 -- host/fio.sh@51 -- # get_nvme_bdfs 00:15:28.734 06:40:24 -- common/autotest_common.sh@1508 -- # bdfs=() 00:15:28.734 06:40:24 -- common/autotest_common.sh@1508 -- # local bdfs 00:15:28.734 06:40:24 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:28.734 06:40:24 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:15:28.734 06:40:24 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:28.734 06:40:24 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:15:28.734 06:40:24 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:28.734 06:40:24 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:28.993 Nvme0n1 00:15:28.993 06:40:24 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:29.252 06:40:24 -- host/fio.sh@53 -- # ls_guid=c9b2502e-833a-4f13-8d58-9d0949c5b4da 00:15:29.252 06:40:24 -- host/fio.sh@54 -- # get_lvs_free_mb c9b2502e-833a-4f13-8d58-9d0949c5b4da 00:15:29.252 06:40:24 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c9b2502e-833a-4f13-8d58-9d0949c5b4da 00:15:29.252 06:40:24 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:29.252 06:40:24 -- common/autotest_common.sh@1355 -- # local fc 00:15:29.252 06:40:24 -- 
common/autotest_common.sh@1356 -- # local cs 00:15:29.252 06:40:24 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:29.511 06:40:24 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:29.511 { 00:15:29.511 "uuid": "c9b2502e-833a-4f13-8d58-9d0949c5b4da", 00:15:29.511 "name": "lvs_0", 00:15:29.511 "base_bdev": "Nvme0n1", 00:15:29.511 "total_data_clusters": 4, 00:15:29.511 "free_clusters": 4, 00:15:29.511 "block_size": 4096, 00:15:29.511 "cluster_size": 1073741824 00:15:29.511 } 00:15:29.511 ]' 00:15:29.511 06:40:24 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c9b2502e-833a-4f13-8d58-9d0949c5b4da") .free_clusters' 00:15:29.511 06:40:24 -- common/autotest_common.sh@1358 -- # fc=4 00:15:29.511 06:40:24 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c9b2502e-833a-4f13-8d58-9d0949c5b4da") .cluster_size' 00:15:29.774 4096 00:15:29.774 06:40:24 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:15:29.774 06:40:24 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:15:29.774 06:40:24 -- common/autotest_common.sh@1363 -- # echo 4096 00:15:29.774 06:40:24 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:30.031 6c3d0325-23e4-49c7-8681-629a4daee744 00:15:30.031 06:40:25 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:30.289 06:40:25 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:30.548 06:40:25 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:30.806 06:40:26 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.806 06:40:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.806 06:40:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:30.806 06:40:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.806 06:40:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:30.806 06:40:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.806 06:40:26 -- common/autotest_common.sh@1330 -- # shift 00:15:30.806 06:40:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:30.806 06:40:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:30.806 06:40:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:30.806 06:40:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.806 06:40:26 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:30.806 06:40:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:30.806 06:40:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:30.806 06:40:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:30.806 06:40:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:30.806 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:30.806 fio-3.35 00:15:30.806 Starting 1 thread 00:15:33.340 00:15:33.340 test: (groupid=0, jobs=1): err= 0: pid=81286: Thu Dec 5 06:40:28 2024 00:15:33.340 read: IOPS=6540, BW=25.5MiB/s (26.8MB/s)(51.3MiB/2009msec) 00:15:33.340 slat (nsec): min=1953, max=332081, avg=2710.32, stdev=3853.63 00:15:33.340 clat (usec): min=2926, max=17816, avg=10224.40, stdev=842.01 00:15:33.340 lat (usec): min=2935, max=17819, avg=10227.11, stdev=841.70 00:15:33.340 clat percentiles (usec): 00:15:33.340 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:15:33.340 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:15:33.340 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:15:33.340 | 99.00th=[12125], 99.50th=[12387], 99.90th=[15139], 99.95th=[16712], 00:15:33.340 | 99.99th=[17695] 00:15:33.340 bw ( KiB/s): min=24992, max=26808, per=99.92%, avg=26142.00, stdev=806.41, samples=4 00:15:33.340 iops : min= 6248, max= 6702, avg=6535.50, stdev=201.60, samples=4 00:15:33.340 write: IOPS=6548, BW=25.6MiB/s (26.8MB/s)(51.4MiB/2009msec); 0 zone resets 00:15:33.340 slat (usec): min=2, max=246, avg= 2.81, stdev= 2.81 00:15:33.340 clat (usec): min=2397, max=16516, avg=9275.75, stdev=795.41 00:15:33.340 lat (usec): min=2410, max=16518, avg=9278.56, stdev=795.27 00:15:33.340 clat percentiles (usec): 00:15:33.340 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8717], 00:15:33.340 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:15:33.340 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:15:33.340 | 99.00th=[11076], 99.50th=[11338], 99.90th=[14877], 99.95th=[15270], 00:15:33.340 | 99.99th=[16450] 00:15:33.340 bw ( KiB/s): min=25872, max=26512, per=100.00%, avg=26194.00, stdev=265.84, samples=4 00:15:33.340 iops : min= 6468, max= 6628, avg=6548.50, stdev=66.46, samples=4 00:15:33.340 lat (msec) : 4=0.06%, 10=61.59%, 20=38.35% 00:15:33.340 cpu : usr=74.75%, sys=19.32%, ctx=4, majf=0, minf=5 00:15:33.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:33.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.340 issued rwts: total=13140,13155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.340 00:15:33.340 Run status group 0 (all jobs): 00:15:33.340 READ: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=51.3MiB (53.8MB), run=2009-2009msec 00:15:33.340 WRITE: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=51.4MiB (53.9MB), run=2009-2009msec 00:15:33.340 06:40:28 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:33.598 06:40:28 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:33.598 06:40:29 -- host/fio.sh@64 -- # ls_nested_guid=98c0e526-ca13-4d4b-a2d2-18c02e2937f1 00:15:33.598 06:40:29 -- host/fio.sh@65 -- # get_lvs_free_mb 98c0e526-ca13-4d4b-a2d2-18c02e2937f1 00:15:33.598 06:40:29 -- common/autotest_common.sh@1353 -- # local lvs_uuid=98c0e526-ca13-4d4b-a2d2-18c02e2937f1 00:15:33.598 06:40:29 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:33.598 06:40:29 -- common/autotest_common.sh@1355 -- # local fc 00:15:33.598 06:40:29 -- common/autotest_common.sh@1356 -- # local cs 00:15:33.598 06:40:29 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:33.868 06:40:29 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:33.868 { 00:15:33.868 "uuid": "c9b2502e-833a-4f13-8d58-9d0949c5b4da", 00:15:33.868 "name": "lvs_0", 00:15:33.868 "base_bdev": "Nvme0n1", 00:15:33.868 "total_data_clusters": 4, 00:15:33.868 "free_clusters": 0, 00:15:33.868 "block_size": 4096, 00:15:33.868 "cluster_size": 1073741824 00:15:33.868 }, 00:15:33.868 { 00:15:33.868 "uuid": "98c0e526-ca13-4d4b-a2d2-18c02e2937f1", 00:15:33.868 "name": "lvs_n_0", 00:15:33.868 "base_bdev": "6c3d0325-23e4-49c7-8681-629a4daee744", 00:15:33.868 "total_data_clusters": 1022, 00:15:33.868 "free_clusters": 1022, 00:15:33.868 "block_size": 4096, 00:15:33.868 "cluster_size": 4194304 00:15:33.868 } 00:15:33.868 ]' 00:15:33.868 06:40:29 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="98c0e526-ca13-4d4b-a2d2-18c02e2937f1") .free_clusters' 00:15:34.126 06:40:29 -- common/autotest_common.sh@1358 -- # fc=1022 00:15:34.126 06:40:29 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="98c0e526-ca13-4d4b-a2d2-18c02e2937f1") .cluster_size' 00:15:34.126 4088 00:15:34.126 06:40:29 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:34.126 06:40:29 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:15:34.126 06:40:29 -- common/autotest_common.sh@1363 -- # echo 4088 00:15:34.126 06:40:29 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:34.126 8c7a352e-706f-41be-84f2-d81aafdee8fc 00:15:34.385 06:40:29 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:34.643 06:40:29 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:34.643 06:40:30 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:34.900 06:40:30 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.900 06:40:30 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.900 06:40:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:34.900 06:40:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.900 
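The two free-space computations above (4096 MiB for lvs_0, whose 4 free clusters are 1 GiB each, and 4088 MiB for the nested lvs_n_0, whose 1022 free clusters are 4 MiB each) both reduce to free_clusters * cluster_size. A minimal standalone sketch of that jq arithmetic, assuming the bdev_lvol_get_lvstores output shape shown above (the UUID is the lvs_0 one from this run):

  # Sketch: derive free MiB for one lvstore, as get_lvs_free_mb does above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=c9b2502e-833a-4f13-8d58-9d0949c5b4da
  lvs_info=$("$rpc" bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs_info")
  cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs_info")
  echo $(( fc * cs / 1024 / 1024 ))   # 4 * 1073741824 / 2^20 = 4096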
06:40:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:34.900 06:40:30 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.900 06:40:30 -- common/autotest_common.sh@1330 -- # shift 00:15:34.900 06:40:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:34.900 06:40:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:34.900 06:40:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:34.900 06:40:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:34.900 06:40:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:15:34.900 06:40:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:15:34.900 06:40:30 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:34.900 06:40:30 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:35.159 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:35.159 fio-3.35 00:15:35.159 Starting 1 thread 00:15:37.691 00:15:37.691 test: (groupid=0, jobs=1): err= 0: pid=81360: Thu Dec 5 06:40:32 2024 00:15:37.691 read: IOPS=5805, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2009msec) 00:15:37.691 slat (usec): min=2, max=313, avg= 2.73, stdev= 3.96 00:15:37.691 clat (usec): min=3218, max=19329, avg=11538.86, stdev=970.49 00:15:37.691 lat (usec): min=3228, max=19332, avg=11541.59, stdev=970.09 00:15:37.691 clat percentiles (usec): 00:15:37.691 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:15:37.691 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:15:37.691 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:15:37.691 | 99.00th=[13566], 99.50th=[14091], 99.90th=[17695], 99.95th=[19006], 00:15:37.691 | 99.99th=[19268] 00:15:37.691 bw ( KiB/s): min=22475, max=23560, per=99.86%, avg=23188.75, stdev=502.37, samples=4 00:15:37.691 iops : min= 5618, max= 5890, avg=5797.00, stdev=125.95, samples=4 00:15:37.691 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec); 0 zone resets 00:15:37.691 slat (usec): min=2, max=270, avg= 2.83, stdev= 3.06 00:15:37.691 clat (usec): min=2484, max=19362, avg=10447.67, stdev=918.55 00:15:37.691 lat (usec): min=2498, max=19364, avg=10450.50, stdev=918.35 00:15:37.691 clat percentiles (usec): 00:15:37.691 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:15:37.691 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:15:37.691 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:15:37.691 | 99.00th=[12387], 99.50th=[12911], 99.90th=[16319], 99.95th=[17695], 00:15:37.691 | 99.99th=[19268] 00:15:37.691 bw ( KiB/s): 
min=22912, max=23321, per=99.85%, avg=23126.25, stdev=169.38, samples=4 00:15:37.691 iops : min= 5728, max= 5830, avg=5781.50, stdev=42.25, samples=4 00:15:37.691 lat (msec) : 4=0.06%, 10=16.81%, 20=83.13% 00:15:37.691 cpu : usr=73.51%, sys=20.72%, ctx=3, majf=0, minf=5 00:15:37.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:37.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.691 issued rwts: total=11663,11633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.691 00:15:37.691 Run status group 0 (all jobs): 00:15:37.691 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2009-2009msec 00:15:37.691 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:15:37.691 06:40:32 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:37.691 06:40:33 -- host/fio.sh@74 -- # sync 00:15:37.691 06:40:33 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:37.949 06:40:33 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:38.208 06:40:33 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:38.467 06:40:33 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:38.726 06:40:34 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:39.293 06:40:34 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:39.293 06:40:34 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:39.293 06:40:34 -- host/fio.sh@86 -- # nvmftestfini 00:15:39.293 06:40:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.293 06:40:34 -- nvmf/common.sh@116 -- # sync 00:15:39.293 06:40:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:39.293 06:40:34 -- nvmf/common.sh@119 -- # set +e 00:15:39.294 06:40:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.294 06:40:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:39.294 rmmod nvme_tcp 00:15:39.294 rmmod nvme_fabrics 00:15:39.294 rmmod nvme_keyring 00:15:39.294 06:40:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.294 06:40:34 -- nvmf/common.sh@123 -- # set -e 00:15:39.294 06:40:34 -- nvmf/common.sh@124 -- # return 0 00:15:39.294 06:40:34 -- nvmf/common.sh@477 -- # '[' -n 81050 ']' 00:15:39.294 06:40:34 -- nvmf/common.sh@478 -- # killprocess 81050 00:15:39.294 06:40:34 -- common/autotest_common.sh@936 -- # '[' -z 81050 ']' 00:15:39.294 06:40:34 -- common/autotest_common.sh@940 -- # kill -0 81050 00:15:39.294 06:40:34 -- common/autotest_common.sh@941 -- # uname 00:15:39.294 06:40:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.294 06:40:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81050 00:15:39.553 killing process with pid 81050 00:15:39.553 06:40:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.553 06:40:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.553 06:40:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81050' 00:15:39.553 06:40:34 -- common/autotest_common.sh@955 
-- # kill 81050 00:15:39.553 06:40:34 -- common/autotest_common.sh@960 -- # wait 81050 00:15:39.553 06:40:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.553 06:40:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.553 06:40:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.553 06:40:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.553 06:40:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.553 06:40:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.553 06:40:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.553 06:40:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.553 06:40:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.553 ************************************ 00:15:39.553 END TEST nvmf_fio_host 00:15:39.553 ************************************ 00:15:39.553 00:15:39.553 real 0m19.365s 00:15:39.553 user 1m25.385s 00:15:39.553 sys 0m4.302s 00:15:39.553 06:40:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.553 06:40:34 -- common/autotest_common.sh@10 -- # set +x 00:15:39.553 06:40:34 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:39.553 06:40:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.553 06:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.553 06:40:34 -- common/autotest_common.sh@10 -- # set +x 00:15:39.553 ************************************ 00:15:39.553 START TEST nvmf_failover 00:15:39.553 ************************************ 00:15:39.553 06:40:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:39.811 * Looking for test storage... 00:15:39.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.811 06:40:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.811 06:40:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.811 06:40:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:39.811 06:40:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:39.811 06:40:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:39.811 06:40:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:39.811 06:40:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:39.811 06:40:35 -- scripts/common.sh@335 -- # IFS=.-: 00:15:39.811 06:40:35 -- scripts/common.sh@335 -- # read -ra ver1 00:15:39.811 06:40:35 -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.811 06:40:35 -- scripts/common.sh@336 -- # read -ra ver2 00:15:39.811 06:40:35 -- scripts/common.sh@337 -- # local 'op=<' 00:15:39.811 06:40:35 -- scripts/common.sh@339 -- # ver1_l=2 00:15:39.811 06:40:35 -- scripts/common.sh@340 -- # ver2_l=1 00:15:39.811 06:40:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:39.811 06:40:35 -- scripts/common.sh@343 -- # case "$op" in 00:15:39.811 06:40:35 -- scripts/common.sh@344 -- # : 1 00:15:39.811 06:40:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:39.811 06:40:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.811 06:40:35 -- scripts/common.sh@364 -- # decimal 1 00:15:39.811 06:40:35 -- scripts/common.sh@352 -- # local d=1 00:15:39.811 06:40:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.811 06:40:35 -- scripts/common.sh@354 -- # echo 1 00:15:39.811 06:40:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:39.811 06:40:35 -- scripts/common.sh@365 -- # decimal 2 00:15:39.811 06:40:35 -- scripts/common.sh@352 -- # local d=2 00:15:39.811 06:40:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.811 06:40:35 -- scripts/common.sh@354 -- # echo 2 00:15:39.811 06:40:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:39.811 06:40:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:39.811 06:40:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:39.811 06:40:35 -- scripts/common.sh@367 -- # return 0 00:15:39.811 06:40:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.811 06:40:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.811 --rc genhtml_branch_coverage=1 00:15:39.811 --rc genhtml_function_coverage=1 00:15:39.811 --rc genhtml_legend=1 00:15:39.811 --rc geninfo_all_blocks=1 00:15:39.811 --rc geninfo_unexecuted_blocks=1 00:15:39.811 00:15:39.811 ' 00:15:39.811 06:40:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.811 --rc genhtml_branch_coverage=1 00:15:39.811 --rc genhtml_function_coverage=1 00:15:39.811 --rc genhtml_legend=1 00:15:39.812 --rc geninfo_all_blocks=1 00:15:39.812 --rc geninfo_unexecuted_blocks=1 00:15:39.812 00:15:39.812 ' 00:15:39.812 06:40:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:39.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.812 --rc genhtml_branch_coverage=1 00:15:39.812 --rc genhtml_function_coverage=1 00:15:39.812 --rc genhtml_legend=1 00:15:39.812 --rc geninfo_all_blocks=1 00:15:39.812 --rc geninfo_unexecuted_blocks=1 00:15:39.812 00:15:39.812 ' 00:15:39.812 06:40:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:39.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.812 --rc genhtml_branch_coverage=1 00:15:39.812 --rc genhtml_function_coverage=1 00:15:39.812 --rc genhtml_legend=1 00:15:39.812 --rc geninfo_all_blocks=1 00:15:39.812 --rc geninfo_unexecuted_blocks=1 00:15:39.812 00:15:39.812 ' 00:15:39.812 06:40:35 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.812 06:40:35 -- nvmf/common.sh@7 -- # uname -s 00:15:39.812 06:40:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.812 06:40:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.812 06:40:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.812 06:40:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.812 06:40:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.812 06:40:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.812 06:40:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.812 06:40:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.812 06:40:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.812 06:40:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:15:39.812 
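The cmp_versions walk traced above is a plain field-by-field compare of dotted version strings (here establishing lcov 1.15 < 2 before composing LCOV_OPTS). A condensed sketch of the same logic, assuming purely numeric fields:

  # Sketch: succeed (return 0) when $1 sorts before $2, as in scripts/common.sh.
  version_lt() {
    local -a v1 v2
    local i n
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'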
06:40:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:15:39.812 06:40:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.812 06:40:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.812 06:40:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.812 06:40:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.812 06:40:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.812 06:40:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.812 06:40:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.812 06:40:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 06:40:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 06:40:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 06:40:35 -- paths/export.sh@5 -- # export PATH 00:15:39.812 06:40:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.812 06:40:35 -- nvmf/common.sh@46 -- # : 0 00:15:39.812 06:40:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.812 06:40:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.812 06:40:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.812 06:40:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.812 06:40:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.812 06:40:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
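NVME_CONNECT, NVME_HOSTNQN and the NVME_HOST array above exist so the initiator side can present a stable host identity. This run drives I/O through the SPDK fio plugin and bdevperf rather than the kernel initiator, but as a hedged illustration, the same variables would feed an nvme-cli connect to the cnode1 subsystem this script creates later:

  # Sketch only; this test never invokes the kernel initiator itself.
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e
  NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"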
00:15:39.812 06:40:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.812 06:40:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.812 06:40:35 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.812 06:40:35 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.812 06:40:35 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.812 06:40:35 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.812 06:40:35 -- host/failover.sh@18 -- # nvmftestinit 00:15:39.812 06:40:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.812 06:40:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.812 06:40:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.812 06:40:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.812 06:40:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.812 06:40:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.812 06:40:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.812 06:40:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.812 06:40:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.812 06:40:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.812 06:40:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.812 06:40:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.812 06:40:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.812 06:40:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.812 06:40:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.812 06:40:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.812 06:40:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.812 06:40:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.812 06:40:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.812 06:40:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.812 06:40:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.812 06:40:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.812 06:40:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.812 06:40:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.812 Cannot find device "nvmf_tgt_br" 00:15:39.812 06:40:35 -- nvmf/common.sh@154 -- # true 00:15:39.812 06:40:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.812 Cannot find device "nvmf_tgt_br2" 00:15:39.812 06:40:35 -- nvmf/common.sh@155 -- # true 00:15:39.812 06:40:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.812 06:40:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.812 Cannot find device "nvmf_tgt_br" 00:15:39.812 06:40:35 -- nvmf/common.sh@157 -- # true 00:15:39.812 06:40:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.812 Cannot find device "nvmf_tgt_br2" 00:15:39.812 06:40:35 -- nvmf/common.sh@158 -- # true 00:15:39.812 06:40:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:40.070 06:40:35 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:15:40.070 06:40:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.070 06:40:35 -- nvmf/common.sh@161 -- # true 00:15:40.070 06:40:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.070 06:40:35 -- nvmf/common.sh@162 -- # true 00:15:40.070 06:40:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.070 06:40:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.070 06:40:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.070 06:40:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.070 06:40:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.070 06:40:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.070 06:40:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.070 06:40:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.070 06:40:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.070 06:40:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:40.070 06:40:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:40.070 06:40:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:40.070 06:40:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:40.070 06:40:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.070 06:40:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.070 06:40:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.070 06:40:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:40.070 06:40:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:40.070 06:40:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.070 06:40:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.070 06:40:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.070 06:40:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.070 06:40:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.070 06:40:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:40.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:40.070 00:15:40.070 --- 10.0.0.2 ping statistics --- 00:15:40.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.070 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:40.070 06:40:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:40.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:40.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:15:40.070 00:15:40.070 --- 10.0.0.3 ping statistics --- 00:15:40.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.070 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:15:40.070 06:40:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:40.070 00:15:40.070 --- 10.0.0.1 ping statistics --- 00:15:40.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.070 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:40.070 06:40:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.070 06:40:35 -- nvmf/common.sh@421 -- # return 0 00:15:40.070 06:40:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:40.070 06:40:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.070 06:40:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:40.070 06:40:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:40.070 06:40:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.070 06:40:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:40.070 06:40:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:40.335 06:40:35 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:40.335 06:40:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:40.335 06:40:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.335 06:40:35 -- common/autotest_common.sh@10 -- # set +x 00:15:40.335 06:40:35 -- nvmf/common.sh@469 -- # nvmfpid=81610 00:15:40.335 06:40:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:40.335 06:40:35 -- nvmf/common.sh@470 -- # waitforlisten 81610 00:15:40.335 06:40:35 -- common/autotest_common.sh@829 -- # '[' -z 81610 ']' 00:15:40.335 06:40:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.335 06:40:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.336 06:40:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.336 06:40:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.336 06:40:35 -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 [2024-12-05 06:40:35.589285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:40.336 [2024-12-05 06:40:35.589405] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.336 [2024-12-05 06:40:35.722247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.336 [2024-12-05 06:40:35.759530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.336 [2024-12-05 06:40:35.759933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.336 [2024-12-05 06:40:35.759987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
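Stripped of the xtrace noise, the nvmf_veth_init sequence above builds this topology: the target interface lives in a private namespace, the initiator interface stays in the root namespace, and a bridge joins their peers (a sketch; the second target interface, 10.0.0.3 on nvmf_tgt_if2, is wired the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # the 0.060 ms reply above confirms the path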
00:15:40.336 [2024-12-05 06:40:35.760216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.336 [2024-12-05 06:40:35.760310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.336 [2024-12-05 06:40:35.760831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.336 [2024-12-05 06:40:35.760881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.284 06:40:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.284 06:40:36 -- common/autotest_common.sh@862 -- # return 0 00:15:41.284 06:40:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:41.284 06:40:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.284 06:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:41.284 06:40:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.284 06:40:36 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.543 [2024-12-05 06:40:36.827824] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.543 06:40:36 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:41.802 Malloc0 00:15:41.802 06:40:37 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:42.061 06:40:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.320 06:40:37 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.579 [2024-12-05 06:40:37.820540] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.579 06:40:37 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:42.838 [2024-12-05 06:40:38.044715] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:42.838 06:40:38 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:42.838 [2024-12-05 06:40:38.264890] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:42.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
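Condensed from the rpc.py calls traced above, the whole failover target amounts to one TCP transport, one 64 MiB malloc-backed namespace, and a single subsystem listening on three ports (a sketch, with paths as elsewhere in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
  for port in 4420 4421 4422; do
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
  done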
00:15:42.838 06:40:38 -- host/failover.sh@31 -- # bdevperf_pid=81669 00:15:42.838 06:40:38 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:42.838 06:40:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.838 06:40:38 -- host/failover.sh@34 -- # waitforlisten 81669 /var/tmp/bdevperf.sock 00:15:42.838 06:40:38 -- common/autotest_common.sh@829 -- # '[' -z 81669 ']' 00:15:42.838 06:40:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.838 06:40:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.838 06:40:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.838 06:40:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.838 06:40:38 -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 06:40:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.215 06:40:39 -- common/autotest_common.sh@862 -- # return 0 00:15:44.215 06:40:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.215 NVMe0n1 00:15:44.215 06:40:39 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.474 00:15:44.474 06:40:39 -- host/failover.sh@39 -- # run_test_pid=81693 00:15:44.474 06:40:39 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.474 06:40:39 -- host/failover.sh@41 -- # sleep 1 00:15:45.850 06:40:40 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.850 [2024-12-05 06:40:41.178817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178931] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 [2024-12-05 06:40:41.178954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12022b0 is same with the state(5) to be set 00:15:45.850 06:40:41 -- host/failover.sh@45 -- # sleep 3 00:15:49.134 06:40:44 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.134 00:15:49.134 06:40:44 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:49.394 [2024-12-05 06:40:44.781957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 [2024-12-05 06:40:44.782862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104e6b0 is same with the state(5) to be set 00:15:49.394 06:40:44 -- host/failover.sh@50 -- # sleep 3 00:15:52.679 06:40:47 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.679 [2024-12-05 06:40:48.039037] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.679 06:40:48 -- host/failover.sh@55 -- # sleep 1 00:15:53.616 06:40:49 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:53.876 [2024-12-05 06:40:49.309863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.309915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.309943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.309950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.309958] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set
00:15:53.876 [... last message repeated 20 more times for tqpair=0x11f5b20 while the 10.0.0.2:4422 listener is torn down ...]
00:15:53.876 [2024-12-05 06:40:49.310112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the
state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 [2024-12-05 06:40:49.310193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5b20 is same with the state(5) to be set 00:15:53.876 06:40:49 -- host/failover.sh@59 -- # wait 81693 00:16:00.531 0 00:16:00.531 06:40:55 -- host/failover.sh@61 -- # killprocess 81669 00:16:00.531 06:40:55 -- common/autotest_common.sh@936 -- # '[' -z 81669 ']' 00:16:00.531 06:40:55 -- common/autotest_common.sh@940 -- # kill -0 81669 00:16:00.531 06:40:55 -- common/autotest_common.sh@941 -- # uname 00:16:00.531 06:40:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.531 06:40:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81669 00:16:00.531 killing process with pid 81669 00:16:00.531 06:40:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:00.531 06:40:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:00.531 06:40:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81669' 00:16:00.531 06:40:55 -- common/autotest_common.sh@955 -- # kill 81669 00:16:00.531 06:40:55 -- common/autotest_common.sh@960 -- # wait 81669 00:16:00.531 06:40:55 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:00.531 [2024-12-05 06:40:38.333462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:00.531 [2024-12-05 06:40:38.333574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81669 ] 00:16:00.531 [2024-12-05 06:40:38.489367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.531 [2024-12-05 06:40:38.534696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.531 Running I/O for 15 seconds... 
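The try.txt dump that follows is bdevperf's side of the failover exercise driven above: failover.sh cycles the listeners underneath the 15-second verify workload, so each removal tears down the active qpair and every in-flight command on it completes with ABORTED - SQ DELETION, letting I/O resume on a surviving path. The driving sequence, condensed from the host/failover.sh@43-57 trace (bdevperf separately attaches the 4422 path between the first two removals):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # restore the first path
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420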
00:16:00.531 [2024-12-05 06:40:41.179015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.531 [2024-12-05 06:40:41.179067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.531 [... between 06:40:41.179093 and 06:40:41.183037 the same print_command / ABORTED - SQ DELETION completion pair repeats for every remaining READ/WRITE outstanding on qid:1 (lba 123544-124864); the repetitions are elided ...]
00:16:00.534 [2024-12-05 06:40:41.183052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1971a40 is same with the state(5) to be set
00:16:00.534 [2024-12-05 06:40:41.183069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:00.534 [2024-12-05 06:40:41.183079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:00.534 [2024-12-05 06:40:41.183092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124264 len:8 PRP1 0x0 PRP2 0x0
00:16:00.534 [2024-12-05 06:40:41.183105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.534 [2024-12-05 06:40:41.183152] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1971a40 was disconnected and freed. reset controller.
00:16:00.534 [2024-12-05 06:40:41.183171] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:00.534 [2024-12-05 06:40:41.183224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.534 [2024-12-05 06:40:41.183246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.534 [... the same ASYNC EVENT REQUEST abort/completion pair repeats for cid:1, cid:2 and cid:3 between 06:40:41.183261 and 06:40:41.183357 ...]
00:16:00.534 [2024-12-05 06:40:41.183371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:00.534 [2024-12-05 06:40:41.183426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193dd40 (9): Bad file descriptor
00:16:00.534 [2024-12-05 06:40:41.185953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:00.534 [2024-12-05 06:40:41.215558] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
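The bdev_nvme_failover_trid notice above is the host moving I/O from the 10.0.0.2:4420 listener to 10.0.0.2:4421 after the admin queue died. As a hedged sketch of how such a two-path setup is typically registered over SPDK's rpc.py (these command names exist in SPDK, but flags vary by version, and the addresses/NQN are simply copied from this log, so treat it as illustrative rather than failover.sh itself):

    # Target side: expose the subsystem on both TCP listeners.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Host side: attach through the primary trid; registering 4421 as the
    # alternate failover path is version-specific and elided here.
    rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1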
00:16:00.534 [2024-12-05 06:40:44.780349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:00.534 [2024-12-05 06:40:44.780431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.534 [... the same admin abort/completion pair repeats for cid:2, cid:1 and cid:0 between 06:40:44.780487 and 06:40:44.780556 ...]
00:16:00.534 [2024-12-05 06:40:44.780569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193dd40 is same with the state(5) to be set
00:16:00.534 [2024-12-05 06:40:44.782993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.534 [2024-12-05 06:40:44.783024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.535 [... from 06:40:44.783047 onward the per-command print_command / ABORTED - SQ DELETION pair repeats for the I/O outstanding during this second failover (lba 126560-127416); the run is elided ...]
p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 
[2024-12-05 06:40:44.784739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.784953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.784995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.536 [2024-12-05 06:40:44.785235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.536 [2024-12-05 06:40:44.785304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.536 [2024-12-05 06:40:44.785321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.785585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.785885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.785941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.785956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.785993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.537 [2024-12-05 06:40:44.786470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.537 [2024-12-05 06:40:44.786527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.537 [2024-12-05 06:40:44.786542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:00.538 [2024-12-05 06:40:44.786556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 06:40:44.786824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.538 [2024-12-05 06:40:44.786839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.538 [2024-12-05 
00:16:00.538 [2024-12-05 06:40:44.786867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195cc00 is same with the state(5) to be set
00:16:00.538 [2024-12-05 06:40:44.786882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:00.538 [2024-12-05 06:40:44.786892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:00.538 [2024-12-05 06:40:44.786903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127232 len:8 PRP1 0x0 PRP2 0x0
00:16:00.538 [2024-12-05 06:40:44.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.538 [2024-12-05 06:40:44.786961] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x195cc00 was disconnected and freed. reset controller.
00:16:00.538 [2024-12-05 06:40:44.786979] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:16:00.538 [2024-12-05 06:40:44.786994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:00.538 [2024-12-05 06:40:44.789426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:00.538 [2024-12-05 06:40:44.789463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193dd40 (9): Bad file descriptor
00:16:00.538 [2024-12-05 06:40:44.823121] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:00.538 [2024-12-05 06:40:49.310282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:00.538 [2024-12-05 06:40:49.310332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for every remaining queued READ/WRITE on qid:1 (lba 102792 through 104000); each completion is ABORTED - SQ DELETION (00/08) ...]
00:16:00.540 [2024-12-05 06:40:49.313419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:00.540 [2024-12-05 06:40:49.313432] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.540 [2024-12-05 06:40:49.313447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.540 [2024-12-05 06:40:49.313462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.540 [2024-12-05 06:40:49.313477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.540 [2024-12-05 06:40:49.313490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.540 [2024-12-05 06:40:49.313504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.540 [2024-12-05 06:40:49.313517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.313768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.313795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.313850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.313924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.313953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.313980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.313995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.314156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:00.541 [2024-12-05 06:40:49.314283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 
06:40:49.314326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:00.541 [2024-12-05 06:40:49.314341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1940970 is same with the state(5) to be set 00:16:00.541 [2024-12-05 06:40:49.314384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:00.541 [2024-12-05 06:40:49.314396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:00.541 [2024-12-05 06:40:49.314410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103576 len:8 PRP1 0x0 PRP2 0x0 00:16:00.541 [2024-12-05 06:40:49.314423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314476] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1940970 was disconnected and freed. reset controller. 00:16:00.541 [2024-12-05 06:40:49.314494] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:00.541 [2024-12-05 06:40:49.314546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.541 [2024-12-05 06:40:49.314567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.541 [2024-12-05 06:40:49.314596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.541 [2024-12-05 06:40:49.314623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.541 [2024-12-05 06:40:49.314650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.541 [2024-12-05 06:40:49.314678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:00.541 [2024-12-05 06:40:49.314723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193dd40 (9): Bad file descriptor 00:16:00.541 [2024-12-05 06:40:49.317192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:00.541 [2024-12-05 06:40:49.348113] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
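[Editor's note] The trace above shows one complete failover cycle: the aborted qpair 0x1940970 is disconnected and freed, the trid fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller reset completes. As a rough sketch (assuming the bdevperf RPC socket at /var/tmp/bdevperf.sock that this run uses elsewhere), the surviving controller and its paths can be inspected with the same rpc.py helper the test scripts call:

  # List bdev_nvme controllers after the failover (socket path assumed from this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # Or just check the named controller survived, as failover.sh does below:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0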
00:16:00.541 00:16:00.541 Latency(us) 00:16:00.541 [2024-12-05T06:40:56.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.541 [2024-12-05T06:40:56.007Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.541 Verification LBA range: start 0x0 length 0x4000 00:16:00.541 NVMe0n1 : 15.01 13465.21 52.60 290.28 0.00 9286.99 377.95 15192.44 00:16:00.541 [2024-12-05T06:40:56.007Z] =================================================================================================================== 00:16:00.541 [2024-12-05T06:40:56.007Z] Total : 13465.21 52.60 290.28 0.00 9286.99 377.95 15192.44 00:16:00.541 Received shutdown signal, test time was about 15.000000 seconds 00:16:00.541 00:16:00.542 Latency(us) 00:16:00.542 [2024-12-05T06:40:56.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.542 [2024-12-05T06:40:56.008Z] =================================================================================================================== 00:16:00.542 [2024-12-05T06:40:56.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.542 06:40:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:00.542 06:40:55 -- host/failover.sh@65 -- # count=3 00:16:00.542 06:40:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:00.542 06:40:55 -- host/failover.sh@73 -- # bdevperf_pid=81871 00:16:00.542 06:40:55 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:00.542 06:40:55 -- host/failover.sh@75 -- # waitforlisten 81871 /var/tmp/bdevperf.sock 00:16:00.542 06:40:55 -- common/autotest_common.sh@829 -- # '[' -z 81871 ']' 00:16:00.542 06:40:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.542 06:40:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.542 06:40:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
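[Editor's note] The table above is internally consistent: 13465.21 IOPS at the 4096-byte I/O size works out to 13465.21 x 4096 / 1048576 ≈ 52.60 MiB/s, matching the MiB/s column. The trace then asserts that exactly three 'Resetting controller successful' notices were emitted, one per failover leg. A minimal sketch of both checks, assuming the run's output was captured to the try.txt file that a later step cats:

  # Reset-count assertion (log file name assumed from the later 'cat try.txt' step)
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1
  # Sanity-check the MiB/s column from IOPS and the 4096-byte I/O size
  awk 'BEGIN { printf "%.2f MiB/s\n", 13465.21 * 4096 / 1048576 }'   # -> 52.60 MiB/s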
00:16:00.542 06:40:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.542 06:40:55 -- common/autotest_common.sh@10 -- # set +x 00:16:00.800 06:40:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.800 06:40:56 -- common/autotest_common.sh@862 -- # return 0 00:16:00.800 06:40:56 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.059 [2024-12-05 06:40:56.494772] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:01.059 06:40:56 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:01.316 [2024-12-05 06:40:56.771099] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:01.574 06:40:56 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.832 NVMe0n1 00:16:01.832 06:40:57 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.091 00:16:02.091 06:40:57 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.350 00:16:02.350 06:40:57 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.350 06:40:57 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:02.608 06:40:57 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.867 06:40:58 -- host/failover.sh@87 -- # sleep 3 00:16:06.164 06:41:01 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.164 06:41:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:06.164 06:41:01 -- host/failover.sh@90 -- # run_test_pid=81948 00:16:06.164 06:41:01 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:06.164 06:41:01 -- host/failover.sh@92 -- # wait 81948 00:16:07.542 0 00:16:07.542 06:41:02 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:07.542 [2024-12-05 06:40:55.313914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:07.542 [2024-12-05 06:40:55.314022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81871 ] 00:16:07.542 [2024-12-05 06:40:55.451724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.542 [2024-12-05 06:40:55.483278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.542 [2024-12-05 06:40:58.204002] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:07.542 [2024-12-05 06:40:58.204103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.542 [2024-12-05 06:40:58.204127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.542 [2024-12-05 06:40:58.204144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.542 [2024-12-05 06:40:58.204157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.542 [2024-12-05 06:40:58.204170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.542 [2024-12-05 06:40:58.204183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.542 [2024-12-05 06:40:58.204196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.542 [2024-12-05 06:40:58.204208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.542 [2024-12-05 06:40:58.204237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:07.542 [2024-12-05 06:40:58.204284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:07.542 [2024-12-05 06:40:58.204313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da8d40 (9): Bad file descriptor 00:16:07.542 [2024-12-05 06:40:58.206562] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:07.542 Running I/O for 1 seconds... 
00:16:07.542 00:16:07.542 Latency(us) 00:16:07.542 [2024-12-05T06:41:03.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.542 [2024-12-05T06:41:03.008Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.542 Verification LBA range: start 0x0 length 0x4000 00:16:07.542 NVMe0n1 : 1.01 13710.60 53.56 0.00 0.00 9286.88 882.50 10664.49 00:16:07.542 [2024-12-05T06:41:03.008Z] =================================================================================================================== 00:16:07.542 [2024-12-05T06:41:03.008Z] Total : 13710.60 53.56 0.00 0.00 9286.88 882.50 10664.49 00:16:07.542 06:41:02 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.542 06:41:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:07.543 06:41:02 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.801 06:41:03 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:07.801 06:41:03 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:08.060 06:41:03 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:08.319 06:41:03 -- host/failover.sh@101 -- # sleep 3 00:16:11.634 06:41:06 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.634 06:41:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:11.634 06:41:06 -- host/failover.sh@108 -- # killprocess 81871 00:16:11.634 06:41:06 -- common/autotest_common.sh@936 -- # '[' -z 81871 ']' 00:16:11.634 06:41:06 -- common/autotest_common.sh@940 -- # kill -0 81871 00:16:11.634 06:41:06 -- common/autotest_common.sh@941 -- # uname 00:16:11.634 06:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.634 06:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81871 00:16:11.634 killing process with pid 81871 00:16:11.634 06:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.634 06:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.634 06:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81871' 00:16:11.634 06:41:06 -- common/autotest_common.sh@955 -- # kill 81871 00:16:11.634 06:41:06 -- common/autotest_common.sh@960 -- # wait 81871 00:16:11.894 06:41:07 -- host/failover.sh@110 -- # sync 00:16:11.894 06:41:07 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.154 06:41:07 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:12.154 06:41:07 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.154 06:41:07 -- host/failover.sh@116 -- # nvmftestfini 00:16:12.154 06:41:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.154 06:41:07 -- nvmf/common.sh@116 -- # sync 00:16:12.154 06:41:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.154 06:41:07 -- nvmf/common.sh@119 -- # set +e 00:16:12.154 06:41:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.154 06:41:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.154 rmmod nvme_tcp 
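[Editor's note] After the 1-second verify pass (13710.60 IOPS, i.e. 13710.60 x 4096 / 1048576 ≈ 53.56 MiB/s, again matching the MiB/s column), the script verifies the controller is alive and then peels paths away one listener at a time. A condensed sketch of the detach-and-verify pattern traced in failover.sh@95-100 above, with the rpc.py path shortened for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Confirm the controller is still present, then drop the 4422 path, re-check, then drop 4421
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1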
00:16:12.154 rmmod nvme_fabrics 00:16:12.154 rmmod nvme_keyring 00:16:12.154 06:41:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.154 06:41:07 -- nvmf/common.sh@123 -- # set -e 00:16:12.154 06:41:07 -- nvmf/common.sh@124 -- # return 0 00:16:12.154 06:41:07 -- nvmf/common.sh@477 -- # '[' -n 81610 ']' 00:16:12.154 06:41:07 -- nvmf/common.sh@478 -- # killprocess 81610 00:16:12.154 06:41:07 -- common/autotest_common.sh@936 -- # '[' -z 81610 ']' 00:16:12.154 06:41:07 -- common/autotest_common.sh@940 -- # kill -0 81610 00:16:12.154 06:41:07 -- common/autotest_common.sh@941 -- # uname 00:16:12.154 06:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.154 06:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81610 00:16:12.154 killing process with pid 81610 00:16:12.154 06:41:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.154 06:41:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.154 06:41:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81610' 00:16:12.154 06:41:07 -- common/autotest_common.sh@955 -- # kill 81610 00:16:12.154 06:41:07 -- common/autotest_common.sh@960 -- # wait 81610 00:16:12.413 06:41:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.413 06:41:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.413 06:41:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.413 06:41:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.413 06:41:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.413 06:41:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.413 06:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.413 06:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.413 06:41:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:12.413 00:16:12.413 real 0m32.713s 00:16:12.413 user 2m6.812s 00:16:12.413 sys 0m5.651s 00:16:12.413 06:41:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.413 06:41:07 -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 ************************************ 00:16:12.413 END TEST nvmf_failover 00:16:12.413 ************************************ 00:16:12.413 06:41:07 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.413 06:41:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.413 06:41:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.413 06:41:07 -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 ************************************ 00:16:12.413 START TEST nvmf_discovery 00:16:12.413 ************************************ 00:16:12.413 06:41:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:12.413 * Looking for test storage... 
00:16:12.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.413 06:41:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:12.413 06:41:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:12.413 06:41:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:12.674 06:41:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:12.674 06:41:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:12.674 06:41:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:12.674 06:41:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:12.674 06:41:07 -- scripts/common.sh@335 -- # IFS=.-: 00:16:12.674 06:41:07 -- scripts/common.sh@335 -- # read -ra ver1 00:16:12.674 06:41:07 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.674 06:41:07 -- scripts/common.sh@336 -- # read -ra ver2 00:16:12.674 06:41:07 -- scripts/common.sh@337 -- # local 'op=<' 00:16:12.674 06:41:07 -- scripts/common.sh@339 -- # ver1_l=2 00:16:12.674 06:41:07 -- scripts/common.sh@340 -- # ver2_l=1 00:16:12.674 06:41:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:12.674 06:41:07 -- scripts/common.sh@343 -- # case "$op" in 00:16:12.674 06:41:07 -- scripts/common.sh@344 -- # : 1 00:16:12.674 06:41:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:12.674 06:41:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.674 06:41:07 -- scripts/common.sh@364 -- # decimal 1 00:16:12.674 06:41:07 -- scripts/common.sh@352 -- # local d=1 00:16:12.674 06:41:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.674 06:41:07 -- scripts/common.sh@354 -- # echo 1 00:16:12.674 06:41:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:12.674 06:41:07 -- scripts/common.sh@365 -- # decimal 2 00:16:12.674 06:41:07 -- scripts/common.sh@352 -- # local d=2 00:16:12.674 06:41:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.674 06:41:07 -- scripts/common.sh@354 -- # echo 2 00:16:12.674 06:41:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:12.674 06:41:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:12.674 06:41:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:12.674 06:41:07 -- scripts/common.sh@367 -- # return 0 00:16:12.674 06:41:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.674 06:41:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:12.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.674 --rc genhtml_branch_coverage=1 00:16:12.674 --rc genhtml_function_coverage=1 00:16:12.674 --rc genhtml_legend=1 00:16:12.674 --rc geninfo_all_blocks=1 00:16:12.674 --rc geninfo_unexecuted_blocks=1 00:16:12.674 00:16:12.674 ' 00:16:12.674 06:41:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:12.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.674 --rc genhtml_branch_coverage=1 00:16:12.674 --rc genhtml_function_coverage=1 00:16:12.674 --rc genhtml_legend=1 00:16:12.674 --rc geninfo_all_blocks=1 00:16:12.674 --rc geninfo_unexecuted_blocks=1 00:16:12.674 00:16:12.674 ' 00:16:12.674 06:41:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:12.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.674 --rc genhtml_branch_coverage=1 00:16:12.674 --rc genhtml_function_coverage=1 00:16:12.674 --rc genhtml_legend=1 00:16:12.674 --rc geninfo_all_blocks=1 00:16:12.674 --rc geninfo_unexecuted_blocks=1 00:16:12.674 00:16:12.674 ' 00:16:12.674 
06:41:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:12.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.674 --rc genhtml_branch_coverage=1 00:16:12.674 --rc genhtml_function_coverage=1 00:16:12.674 --rc genhtml_legend=1 00:16:12.674 --rc geninfo_all_blocks=1 00:16:12.674 --rc geninfo_unexecuted_blocks=1 00:16:12.674 00:16:12.674 ' 00:16:12.674 06:41:07 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.674 06:41:07 -- nvmf/common.sh@7 -- # uname -s 00:16:12.674 06:41:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.674 06:41:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.674 06:41:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.674 06:41:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.674 06:41:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.674 06:41:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.674 06:41:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.674 06:41:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.674 06:41:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.674 06:41:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:12.674 06:41:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:12.674 06:41:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.674 06:41:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.674 06:41:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.674 06:41:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.674 06:41:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.674 06:41:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.674 06:41:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.674 06:41:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.674 06:41:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.674 06:41:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.674 06:41:07 -- paths/export.sh@5 -- # export PATH 00:16:12.674 06:41:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.674 06:41:07 -- nvmf/common.sh@46 -- # : 0 00:16:12.674 06:41:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.674 06:41:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.674 06:41:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.674 06:41:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.674 06:41:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.674 06:41:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:12.674 06:41:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.674 06:41:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.674 06:41:07 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:12.674 06:41:07 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:12.674 06:41:07 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:12.674 06:41:07 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:12.674 06:41:07 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:12.674 06:41:07 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:12.674 06:41:07 -- host/discovery.sh@25 -- # nvmftestinit 00:16:12.674 06:41:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:12.674 06:41:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.674 06:41:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:12.674 06:41:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:12.674 06:41:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:12.674 06:41:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.674 06:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.674 06:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.674 06:41:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:12.674 06:41:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:12.674 06:41:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.674 06:41:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.674 06:41:07 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.674 06:41:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:12.674 06:41:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.674 06:41:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.674 06:41:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.674 06:41:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.674 06:41:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.674 06:41:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.674 06:41:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.674 06:41:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.674 06:41:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:12.674 06:41:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:12.674 Cannot find device "nvmf_tgt_br" 00:16:12.674 06:41:08 -- nvmf/common.sh@154 -- # true 00:16:12.674 06:41:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.675 Cannot find device "nvmf_tgt_br2" 00:16:12.675 06:41:08 -- nvmf/common.sh@155 -- # true 00:16:12.675 06:41:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:12.675 06:41:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:12.675 Cannot find device "nvmf_tgt_br" 00:16:12.675 06:41:08 -- nvmf/common.sh@157 -- # true 00:16:12.675 06:41:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:12.675 Cannot find device "nvmf_tgt_br2" 00:16:12.675 06:41:08 -- nvmf/common.sh@158 -- # true 00:16:12.675 06:41:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:12.675 06:41:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:12.675 06:41:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.675 06:41:08 -- nvmf/common.sh@161 -- # true 00:16:12.675 06:41:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.675 06:41:08 -- nvmf/common.sh@162 -- # true 00:16:12.675 06:41:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.675 06:41:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.934 06:41:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.934 06:41:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.934 06:41:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.934 06:41:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.934 06:41:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.934 06:41:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.934 06:41:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.934 06:41:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:12.934 06:41:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:12.934 06:41:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:12.934 06:41:08 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:12.934 06:41:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.934 06:41:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.934 06:41:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.934 06:41:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:12.934 06:41:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:12.934 06:41:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.934 06:41:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.934 06:41:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.934 06:41:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.934 06:41:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.934 06:41:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:12.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:12.934 00:16:12.934 --- 10.0.0.2 ping statistics --- 00:16:12.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.934 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:12.934 06:41:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:12.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:12.934 00:16:12.934 --- 10.0.0.3 ping statistics --- 00:16:12.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.934 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:12.934 06:41:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:12.934 00:16:12.934 --- 10.0.0.1 ping statistics --- 00:16:12.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.934 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:12.935 06:41:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.935 06:41:08 -- nvmf/common.sh@421 -- # return 0 00:16:12.935 06:41:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:12.935 06:41:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.935 06:41:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:12.935 06:41:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:12.935 06:41:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.935 06:41:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:12.935 06:41:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:12.935 06:41:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:12.935 06:41:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:12.935 06:41:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.935 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:16:12.935 06:41:08 -- nvmf/common.sh@469 -- # nvmfpid=82218 00:16:12.935 06:41:08 -- nvmf/common.sh@470 -- # waitforlisten 82218 00:16:12.935 06:41:08 -- common/autotest_common.sh@829 -- # '[' -z 82218 ']' 00:16:12.935 06:41:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:12.935 06:41:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.935 06:41:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.935 06:41:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.935 06:41:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.935 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:16:12.935 [2024-12-05 06:41:08.373265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:12.935 [2024-12-05 06:41:08.373432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.194 [2024-12-05 06:41:08.512545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.194 [2024-12-05 06:41:08.553242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.194 [2024-12-05 06:41:08.553458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.194 [2024-12-05 06:41:08.553475] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.194 [2024-12-05 06:41:08.553498] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
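[Editor's note] The three pings above confirm the virtual topology the veth init built: an initiator interface (10.0.0.1) in the root namespace, bridged via nvmf_br to two target interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace. Condensed from the trace, the essential steps are (link-up commands and the second target veth omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator namespace -> target namespace, as checked above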
00:16:13.194 [2024-12-05 06:41:08.553534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.131 06:41:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.131 06:41:09 -- common/autotest_common.sh@862 -- # return 0 00:16:14.131 06:41:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.131 06:41:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 06:41:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.131 06:41:09 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.131 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 [2024-12-05 06:41:09.421761] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.131 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.131 06:41:09 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:14.131 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 [2024-12-05 06:41:09.429866] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:14.131 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.131 06:41:09 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:14.131 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 null0 00:16:14.131 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.131 06:41:09 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:14.131 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 null1 00:16:14.131 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.131 06:41:09 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:14.131 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.131 06:41:09 -- host/discovery.sh@45 -- # hostpid=82256 00:16:14.131 06:41:09 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:14.131 06:41:09 -- host/discovery.sh@46 -- # waitforlisten 82256 /tmp/host.sock 00:16:14.131 06:41:09 -- common/autotest_common.sh@829 -- # '[' -z 82256 ']' 00:16:14.131 06:41:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:14.131 06:41:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.131 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.131 06:41:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.131 06:41:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.131 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.131 [2024-12-05 06:41:09.508965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
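[Editor's note] At this point the target has a TCP transport and a discovery listener on 10.0.0.2:8009, plus two 512-block null bdevs (null0, null1) waiting to be exposed. The test itself drives discovery through SPDK's bdev_nvme_start_discovery RPC (below); purely as an illustration, the same discovery log page could be read with stock nvme-cli from inside the target namespace. This is a hypothetical check, not part of this run, and assumes nvme-cli is installed:

  # Hypothetical: fetch the discovery log page with nvme-cli (not what discovery.sh does)
  ip netns exec nvmf_tgt_ns_spdk nvme discover -t tcp -a 10.0.0.2 -s 8009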
00:16:14.131 [2024-12-05 06:41:09.509082] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82256 ] 00:16:14.390 [2024-12-05 06:41:09.646273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.390 [2024-12-05 06:41:09.694815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.390 [2024-12-05 06:41:09.695087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.390 06:41:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.390 06:41:09 -- common/autotest_common.sh@862 -- # return 0 00:16:14.390 06:41:09 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:14.390 06:41:09 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:14.390 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.390 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.390 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.390 06:41:09 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:14.390 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.390 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.390 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.390 06:41:09 -- host/discovery.sh@72 -- # notify_id=0 00:16:14.390 06:41:09 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:14.390 06:41:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:14.390 06:41:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:14.390 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.390 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.390 06:41:09 -- host/discovery.sh@59 -- # xargs 00:16:14.390 06:41:09 -- host/discovery.sh@59 -- # sort 00:16:14.390 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:09 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:14.649 06:41:09 -- host/discovery.sh@79 -- # get_bdev_list 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.649 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # sort 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # xargs 00:16:14.649 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:09 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:14.649 06:41:09 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:14.649 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:09 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:14.649 06:41:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:14.649 06:41:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:14.649 06:41:09 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:09 -- host/discovery.sh@59 -- # sort 00:16:14.649 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:09 -- host/discovery.sh@59 -- # xargs 00:16:14.649 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:09 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:14.649 06:41:09 -- host/discovery.sh@83 -- # get_bdev_list 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.649 06:41:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # sort 00:16:14.649 06:41:09 -- host/discovery.sh@55 -- # xargs 00:16:14.649 06:41:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:10 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:14.649 06:41:10 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:14.649 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:10 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:14.649 06:41:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:14.649 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:14.649 06:41:10 -- host/discovery.sh@59 -- # sort 00:16:14.649 06:41:10 -- host/discovery.sh@59 -- # xargs 00:16:14.649 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.649 06:41:10 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:14.649 06:41:10 -- host/discovery.sh@87 -- # get_bdev_list 00:16:14.649 06:41:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.649 06:41:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:14.649 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.649 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.649 06:41:10 -- host/discovery.sh@55 -- # sort 00:16:14.649 06:41:10 -- host/discovery.sh@55 -- # xargs 00:16:14.649 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:14.908 06:41:10 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:14.908 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.908 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.908 [2024-12-05 06:41:10.154073] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.908 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:14.908 06:41:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:14.908 06:41:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:14.908 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.908 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.908 06:41:10 -- host/discovery.sh@59 -- # sort 00:16:14.908 06:41:10 -- host/discovery.sh@59 -- # xargs 
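The repeated jq/sort/xargs fragments traced here are small helpers that the discovery test calls over and over to snapshot host-side state between RPC steps. A plausible reconstruction of those helpers from the traced commands (the exact bodies live in host/discovery.sh and rpc_cmd is the harness wrapper around scripts/rpc.py; get_notification_count is exercised further below, and its cursor update is inferred from the notify_id values the trace prints):

    get_subsystem_names() {
        # Names of NVMe controllers the host attached via discovery.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Names of bdevs created from attached namespaces (nvme0n1, ...).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # Count notifications newer than $notify_id and advance the cursor.
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }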
00:16:14.908 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:14.908 06:41:10 -- host/discovery.sh@93 -- # get_bdev_list 00:16:14.908 06:41:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:14.908 06:41:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:14.908 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.908 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.908 06:41:10 -- host/discovery.sh@55 -- # sort 00:16:14.908 06:41:10 -- host/discovery.sh@55 -- # xargs 00:16:14.908 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:14.908 06:41:10 -- host/discovery.sh@94 -- # get_notification_count 00:16:14.908 06:41:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:14.908 06:41:10 -- host/discovery.sh@74 -- # jq '. | length' 00:16:14.908 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.908 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.908 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@74 -- # notification_count=0 00:16:14.908 06:41:10 -- host/discovery.sh@75 -- # notify_id=0 00:16:14.908 06:41:10 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:14.908 06:41:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.908 06:41:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.908 06:41:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.908 06:41:10 -- host/discovery.sh@100 -- # sleep 1 00:16:15.475 [2024-12-05 06:41:10.804813] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:15.475 [2024-12-05 06:41:10.804843] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:15.475 [2024-12-05 06:41:10.804861] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:15.475 [2024-12-05 06:41:10.810860] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:15.475 [2024-12-05 06:41:10.866505] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:15.475 [2024-12-05 06:41:10.866557] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:16.042 06:41:11 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:16.042 06:41:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.042 06:41:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.042 06:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.042 06:41:11 -- host/discovery.sh@59 -- # sort 00:16:16.042 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.042 06:41:11 -- host/discovery.sh@59 -- # xargs 00:16:16.042 06:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@102 -- # get_bdev_list 00:16:16.042 06:41:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:16:16.042 06:41:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.042 06:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.042 06:41:11 -- host/discovery.sh@55 -- # sort 00:16:16.042 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.042 06:41:11 -- host/discovery.sh@55 -- # xargs 00:16:16.042 06:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:16.042 06:41:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:16.042 06:41:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:16.042 06:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.042 06:41:11 -- host/discovery.sh@63 -- # sort -n 00:16:16.042 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.042 06:41:11 -- host/discovery.sh@63 -- # xargs 00:16:16.042 06:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:16.042 06:41:11 -- host/discovery.sh@104 -- # get_notification_count 00:16:16.042 06:41:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:16.042 06:41:11 -- host/discovery.sh@74 -- # jq '. | length' 00:16:16.042 06:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.042 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.042 06:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.300 06:41:11 -- host/discovery.sh@74 -- # notification_count=1 00:16:16.300 06:41:11 -- host/discovery.sh@75 -- # notify_id=1 00:16:16.300 06:41:11 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:16.300 06:41:11 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:16.300 06:41:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.300 06:41:11 -- common/autotest_common.sh@10 -- # set +x 00:16:16.300 06:41:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.300 06:41:11 -- host/discovery.sh@109 -- # sleep 1 00:16:17.236 06:41:12 -- host/discovery.sh@110 -- # get_bdev_list 00:16:17.236 06:41:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.236 06:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.236 06:41:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:17.236 06:41:12 -- common/autotest_common.sh@10 -- # set +x 00:16:17.236 06:41:12 -- host/discovery.sh@55 -- # sort 00:16:17.236 06:41:12 -- host/discovery.sh@55 -- # xargs 00:16:17.236 06:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.236 06:41:12 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:17.236 06:41:12 -- host/discovery.sh@111 -- # get_notification_count 00:16:17.236 06:41:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:17.236 06:41:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:17.236 06:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.236 06:41:12 -- common/autotest_common.sh@10 -- # set +x 00:16:17.236 06:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.236 06:41:12 -- host/discovery.sh@74 -- # notification_count=1 00:16:17.236 06:41:12 -- host/discovery.sh@75 -- # notify_id=2 00:16:17.236 06:41:12 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:17.236 06:41:12 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:17.236 06:41:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.236 06:41:12 -- common/autotest_common.sh@10 -- # set +x 00:16:17.236 [2024-12-05 06:41:12.661299] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.236 [2024-12-05 06:41:12.662017] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:17.236 [2024-12-05 06:41:12.662057] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:17.236 06:41:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.236 06:41:12 -- host/discovery.sh@117 -- # sleep 1 00:16:17.236 [2024-12-05 06:41:12.668004] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:17.495 [2024-12-05 06:41:12.728298] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:17.495 [2024-12-05 06:41:12.728365] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:17.495 [2024-12-05 06:41:12.728373] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:18.431 06:41:13 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:18.431 06:41:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.431 06:41:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.431 06:41:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.431 06:41:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 06:41:13 -- host/discovery.sh@59 -- # sort 00:16:18.431 06:41:13 -- host/discovery.sh@59 -- # xargs 00:16:18.431 06:41:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@119 -- # get_bdev_list 00:16:18.431 06:41:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.431 06:41:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.431 06:41:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.431 06:41:13 -- host/discovery.sh@55 -- # sort 00:16:18.431 06:41:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 06:41:13 -- host/discovery.sh@55 -- # xargs 00:16:18.431 06:41:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:18.431 06:41:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:18.431 06:41:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:18.431 06:41:13 -- host/discovery.sh@63 
-- # sort -n 00:16:18.431 06:41:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.431 06:41:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 06:41:13 -- host/discovery.sh@63 -- # xargs 00:16:18.431 06:41:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@121 -- # get_notification_count 00:16:18.431 06:41:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.431 06:41:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.431 06:41:13 -- host/discovery.sh@74 -- # jq '. | length' 00:16:18.431 06:41:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 06:41:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@74 -- # notification_count=0 00:16:18.431 06:41:13 -- host/discovery.sh@75 -- # notify_id=2 00:16:18.431 06:41:13 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:18.431 06:41:13 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:18.431 06:41:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.431 06:41:13 -- common/autotest_common.sh@10 -- # set +x 00:16:18.431 [2024-12-05 06:41:13.891757] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:18.431 [2024-12-05 06:41:13.891808] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:18.431 [2024-12-05 06:41:13.894522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.431 [2024-12-05 06:41:13.894566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.431 [2024-12-05 06:41:13.894587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.431 [2024-12-05 06:41:13.894603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.431 [2024-12-05 06:41:13.894620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.431 [2024-12-05 06:41:13.894631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.431 [2024-12-05 06:41:13.894640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.431 [2024-12-05 06:41:13.894649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.431 [2024-12-05 06:41:13.894658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2251150 is same with the state(5) to be set 00:16:18.431 06:41:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.690 06:41:13 -- host/discovery.sh@127 -- # sleep 1 00:16:18.690 [2024-12-05 06:41:13.897752] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:18.690 [2024-12-05 06:41:13.897802] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:18.690 [2024-12-05 06:41:13.897860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2251150 (9): Bad file descriptor 00:16:19.626 06:41:14 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:19.627 06:41:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.627 06:41:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.627 06:41:14 -- host/discovery.sh@59 -- # sort 00:16:19.627 06:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.627 06:41:14 -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 06:41:14 -- host/discovery.sh@59 -- # xargs 00:16:19.627 06:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.627 06:41:14 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.627 06:41:14 -- host/discovery.sh@129 -- # get_bdev_list 00:16:19.627 06:41:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.627 06:41:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.627 06:41:14 -- host/discovery.sh@55 -- # xargs 00:16:19.627 06:41:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.627 06:41:14 -- host/discovery.sh@55 -- # sort 00:16:19.627 06:41:14 -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 06:41:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.627 06:41:15 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:19.627 06:41:15 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:19.627 06:41:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:19.627 06:41:15 -- host/discovery.sh@63 -- # sort -n 00:16:19.627 06:41:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:19.627 06:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.627 06:41:15 -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 06:41:15 -- host/discovery.sh@63 -- # xargs 00:16:19.627 06:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.627 06:41:15 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:19.627 06:41:15 -- host/discovery.sh@131 -- # get_notification_count 00:16:19.627 06:41:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.627 06:41:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:19.627 06:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.627 06:41:15 -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 06:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.885 06:41:15 -- host/discovery.sh@74 -- # notification_count=0 00:16:19.885 06:41:15 -- host/discovery.sh@75 -- # notify_id=2 00:16:19.885 06:41:15 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:19.885 06:41:15 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:19.885 06:41:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.885 06:41:15 -- common/autotest_common.sh@10 -- # set +x 00:16:19.885 06:41:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.885 06:41:15 -- host/discovery.sh@135 -- # sleep 1 00:16:20.820 06:41:16 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:20.820 06:41:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.820 06:41:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.820 06:41:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.820 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:16:20.820 06:41:16 -- host/discovery.sh@59 -- # sort 00:16:20.820 06:41:16 -- host/discovery.sh@59 -- # xargs 00:16:20.820 06:41:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.820 06:41:16 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:20.820 06:41:16 -- host/discovery.sh@137 -- # get_bdev_list 00:16:20.820 06:41:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.820 06:41:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.820 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:16:20.820 06:41:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.820 06:41:16 -- host/discovery.sh@55 -- # sort 00:16:20.820 06:41:16 -- host/discovery.sh@55 -- # xargs 00:16:20.820 06:41:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.820 06:41:16 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:20.820 06:41:16 -- host/discovery.sh@138 -- # get_notification_count 00:16:20.820 06:41:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:20.820 06:41:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:20.820 06:41:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.820 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:16:20.820 06:41:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.079 06:41:16 -- host/discovery.sh@74 -- # notification_count=2 00:16:21.079 06:41:16 -- host/discovery.sh@75 -- # notify_id=4 00:16:21.079 06:41:16 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:21.079 06:41:16 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:21.079 06:41:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.079 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:16:22.019 [2024-12-05 06:41:17.317266] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:22.019 [2024-12-05 06:41:17.317308] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:22.019 [2024-12-05 06:41:17.317350] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:22.019 [2024-12-05 06:41:17.323319] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:22.019 [2024-12-05 06:41:17.382758] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:22.019 [2024-12-05 06:41:17.382825] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:22.019 06:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.019 06:41:17 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.019 06:41:17 -- common/autotest_common.sh@650 -- # local es=0 00:16:22.019 06:41:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.019 06:41:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.019 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.019 06:41:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.019 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.019 06:41:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.019 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.019 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.019 request: 00:16:22.019 { 00:16:22.019 "name": "nvme", 00:16:22.019 "trtype": "tcp", 00:16:22.019 "traddr": "10.0.0.2", 00:16:22.019 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:22.019 "adrfam": "ipv4", 00:16:22.019 "trsvcid": "8009", 00:16:22.019 "wait_for_attach": true, 00:16:22.019 "method": "bdev_nvme_start_discovery", 00:16:22.019 "req_id": 1 00:16:22.019 } 00:16:22.019 Got JSON-RPC error response 00:16:22.019 response: 00:16:22.019 { 00:16:22.019 "code": -17, 00:16:22.019 "message": "File exists" 00:16:22.019 } 00:16:22.019 06:41:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.019 06:41:17 -- common/autotest_common.sh@653 -- # es=1 00:16:22.019 06:41:17 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.019 06:41:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.019 06:41:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.019 06:41:17 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:22.019 06:41:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:22.019 06:41:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:22.019 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.019 06:41:17 -- host/discovery.sh@67 -- # sort 00:16:22.019 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.019 06:41:17 -- host/discovery.sh@67 -- # xargs 00:16:22.019 06:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.019 06:41:17 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:22.019 06:41:17 -- host/discovery.sh@147 -- # get_bdev_list 00:16:22.019 06:41:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.019 06:41:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.019 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.019 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.019 06:41:17 -- host/discovery.sh@55 -- # sort 00:16:22.019 06:41:17 -- host/discovery.sh@55 -- # xargs 00:16:22.279 06:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.279 06:41:17 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:22.279 06:41:17 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.279 06:41:17 -- common/autotest_common.sh@650 -- # local es=0 00:16:22.279 06:41:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.279 06:41:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.279 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.279 06:41:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.279 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.279 06:41:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:22.279 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.279 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.279 request: 00:16:22.279 { 00:16:22.279 "name": "nvme_second", 00:16:22.279 "trtype": "tcp", 00:16:22.279 "traddr": "10.0.0.2", 00:16:22.279 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:22.279 "adrfam": "ipv4", 00:16:22.279 "trsvcid": "8009", 00:16:22.279 "wait_for_attach": true, 00:16:22.279 "method": "bdev_nvme_start_discovery", 00:16:22.279 "req_id": 1 00:16:22.279 } 00:16:22.279 Got JSON-RPC error response 00:16:22.279 response: 00:16:22.279 { 00:16:22.279 "code": -17, 00:16:22.279 "message": "File exists" 00:16:22.279 } 00:16:22.279 06:41:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.279 06:41:17 -- common/autotest_common.sh@653 -- # es=1 00:16:22.279 06:41:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.279 06:41:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.279 06:41:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.279 
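Both rejections above are the expected outcome: asking the host to start a second discovery service for a traddr/trsvcid pair that is already being polled returns JSON-RPC error -17 ("File exists"), and the NOT wrapper around rpc_cmd passes only when the call fails. Driven directly against the host RPC socket, the same negative check might look like this (socket path, NQN, address, and flags are those in the trace; calling rpc.py instead of the rpc_cmd wrapper is an assumption):

    # Expect failure: discovery for 10.0.0.2:8009 is already running.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
            bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
            -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate discovery start unexpectedly succeeded" >&2
        exit 1
    fi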
06:41:17 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:22.279 06:41:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:22.279 06:41:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:22.279 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.279 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.279 06:41:17 -- host/discovery.sh@67 -- # xargs 00:16:22.279 06:41:17 -- host/discovery.sh@67 -- # sort 00:16:22.279 06:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.279 06:41:17 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:22.279 06:41:17 -- host/discovery.sh@153 -- # get_bdev_list 00:16:22.280 06:41:17 -- host/discovery.sh@55 -- # sort 00:16:22.280 06:41:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.280 06:41:17 -- host/discovery.sh@55 -- # xargs 00:16:22.280 06:41:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.280 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.280 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:22.280 06:41:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.280 06:41:17 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:22.280 06:41:17 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:22.280 06:41:17 -- common/autotest_common.sh@650 -- # local es=0 00:16:22.280 06:41:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:22.280 06:41:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.280 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.280 06:41:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.280 06:41:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.280 06:41:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:22.280 06:41:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.280 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:16:23.216 [2024-12-05 06:41:18.636647] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.216 [2024-12-05 06:41:18.636772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.216 [2024-12-05 06:41:18.636836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.216 [2024-12-05 06:41:18.636853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2292350 with addr=10.0.0.2, port=8010 00:16:23.216 [2024-12-05 06:41:18.636872] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:23.216 [2024-12-05 06:41:18.636881] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:23.216 [2024-12-05 06:41:18.636890] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:24.592 [2024-12-05 06:41:19.636629] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:24.592 [2024-12-05 06:41:19.636762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:24.592 [2024-12-05 06:41:19.636804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:24.592 [2024-12-05 06:41:19.636820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2292350 with addr=10.0.0.2, port=8010 00:16:24.593 [2024-12-05 06:41:19.636836] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:24.593 [2024-12-05 06:41:19.636846] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:24.593 [2024-12-05 06:41:19.636854] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:25.529 [2024-12-05 06:41:20.636510] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:25.529 request: 00:16:25.529 { 00:16:25.529 "name": "nvme_second", 00:16:25.529 "trtype": "tcp", 00:16:25.529 "traddr": "10.0.0.2", 00:16:25.529 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:25.529 "adrfam": "ipv4", 00:16:25.529 "trsvcid": "8010", 00:16:25.529 "attach_timeout_ms": 3000, 00:16:25.529 "method": "bdev_nvme_start_discovery", 00:16:25.529 "req_id": 1 00:16:25.529 } 00:16:25.529 Got JSON-RPC error response 00:16:25.529 response: 00:16:25.529 { 00:16:25.529 "code": -110, 00:16:25.529 "message": "Connection timed out" 00:16:25.529 } 00:16:25.529 06:41:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:25.529 06:41:20 -- common/autotest_common.sh@653 -- # es=1 00:16:25.529 06:41:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.529 06:41:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:25.529 06:41:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.529 06:41:20 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:25.529 06:41:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:25.529 06:41:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:25.529 06:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.529 06:41:20 -- host/discovery.sh@67 -- # sort 00:16:25.529 06:41:20 -- host/discovery.sh@67 -- # xargs 00:16:25.529 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:16:25.529 06:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.529 06:41:20 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:25.529 06:41:20 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:25.529 06:41:20 -- host/discovery.sh@162 -- # kill 82256 00:16:25.529 06:41:20 -- host/discovery.sh@163 -- # nvmftestfini 00:16:25.529 06:41:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:25.529 06:41:20 -- nvmf/common.sh@116 -- # sync 00:16:25.529 06:41:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:25.529 06:41:20 -- nvmf/common.sh@119 -- # set +e 00:16:25.529 06:41:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:25.529 06:41:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:25.529 rmmod nvme_tcp 00:16:25.529 rmmod nvme_fabrics 00:16:25.529 rmmod nvme_keyring 00:16:25.529 06:41:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:25.529 06:41:20 -- nvmf/common.sh@123 -- # set -e 00:16:25.529 06:41:20 -- nvmf/common.sh@124 -- # return 0 00:16:25.529 06:41:20 -- nvmf/common.sh@477 -- # '[' -n 82218 ']' 00:16:25.529 06:41:20 -- nvmf/common.sh@478 -- # killprocess 82218 00:16:25.529 06:41:20 -- common/autotest_common.sh@936 -- # '[' -z 82218 ']' 00:16:25.529 06:41:20 -- common/autotest_common.sh@940 -- # kill -0 82218 00:16:25.529 06:41:20 -- 
common/autotest_common.sh@941 -- # uname 00:16:25.529 06:41:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.529 06:41:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82218 00:16:25.529 killing process with pid 82218 00:16:25.529 06:41:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:25.529 06:41:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:25.529 06:41:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82218' 00:16:25.529 06:41:20 -- common/autotest_common.sh@955 -- # kill 82218 00:16:25.529 06:41:20 -- common/autotest_common.sh@960 -- # wait 82218 00:16:25.529 06:41:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:25.529 06:41:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:25.529 06:41:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:25.529 06:41:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.529 06:41:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:25.529 06:41:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.529 06:41:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.529 06:41:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.788 06:41:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:25.788 00:16:25.788 real 0m13.237s 00:16:25.788 user 0m25.146s 00:16:25.788 sys 0m2.170s 00:16:25.788 06:41:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:25.788 ************************************ 00:16:25.788 END TEST nvmf_discovery 00:16:25.788 ************************************ 00:16:25.788 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:25.788 06:41:21 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:25.788 06:41:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:25.788 06:41:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.788 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:25.788 ************************************ 00:16:25.788 START TEST nvmf_discovery_remove_ifc 00:16:25.788 ************************************ 00:16:25.788 06:41:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:25.788 * Looking for test storage... 
00:16:25.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.788 06:41:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:25.788 06:41:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:25.788 06:41:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:25.788 06:41:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:25.788 06:41:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:25.788 06:41:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:25.788 06:41:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:25.788 06:41:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:25.788 06:41:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:25.788 06:41:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.788 06:41:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:25.788 06:41:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:25.788 06:41:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:25.788 06:41:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:25.788 06:41:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:25.788 06:41:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:25.788 06:41:21 -- scripts/common.sh@344 -- # : 1 00:16:25.788 06:41:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:25.788 06:41:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:25.788 06:41:21 -- scripts/common.sh@364 -- # decimal 1 00:16:25.788 06:41:21 -- scripts/common.sh@352 -- # local d=1 00:16:25.788 06:41:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.788 06:41:21 -- scripts/common.sh@354 -- # echo 1 00:16:25.788 06:41:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:25.788 06:41:21 -- scripts/common.sh@365 -- # decimal 2 00:16:25.788 06:41:21 -- scripts/common.sh@352 -- # local d=2 00:16:25.788 06:41:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.788 06:41:21 -- scripts/common.sh@354 -- # echo 2 00:16:25.788 06:41:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:25.788 06:41:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:25.788 06:41:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:25.788 06:41:21 -- scripts/common.sh@367 -- # return 0 00:16:25.788 06:41:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.788 06:41:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:25.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.788 --rc genhtml_branch_coverage=1 00:16:25.788 --rc genhtml_function_coverage=1 00:16:25.788 --rc genhtml_legend=1 00:16:25.788 --rc geninfo_all_blocks=1 00:16:25.788 --rc geninfo_unexecuted_blocks=1 00:16:25.788 00:16:25.788 ' 00:16:25.788 06:41:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:25.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.788 --rc genhtml_branch_coverage=1 00:16:25.788 --rc genhtml_function_coverage=1 00:16:25.788 --rc genhtml_legend=1 00:16:25.788 --rc geninfo_all_blocks=1 00:16:25.788 --rc geninfo_unexecuted_blocks=1 00:16:25.788 00:16:25.788 ' 00:16:25.788 06:41:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:25.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.788 --rc genhtml_branch_coverage=1 00:16:25.788 --rc genhtml_function_coverage=1 00:16:25.788 --rc genhtml_legend=1 00:16:25.788 --rc geninfo_all_blocks=1 00:16:25.788 --rc geninfo_unexecuted_blocks=1 00:16:25.788 00:16:25.788 ' 00:16:25.788 
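The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, which determines the coverage flags exported in the LCOV_OPTS/LCOV lines around it. A simplified standalone equivalent of that dotted-version comparison, written for illustration rather than copied from the script:

    # lt A B: succeed when dotted version A sorts before B.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo "old lcov: keep branch/function coverage options"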
06:41:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:25.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.788 --rc genhtml_branch_coverage=1 00:16:25.788 --rc genhtml_function_coverage=1 00:16:25.788 --rc genhtml_legend=1 00:16:25.788 --rc geninfo_all_blocks=1 00:16:25.788 --rc geninfo_unexecuted_blocks=1 00:16:25.788 00:16:25.788 ' 00:16:25.788 06:41:21 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.788 06:41:21 -- nvmf/common.sh@7 -- # uname -s 00:16:25.788 06:41:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.788 06:41:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.788 06:41:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.788 06:41:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.788 06:41:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.788 06:41:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.788 06:41:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.788 06:41:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.788 06:41:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.788 06:41:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.788 06:41:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:25.788 06:41:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:25.788 06:41:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.788 06:41:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.788 06:41:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.788 06:41:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.788 06:41:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.789 06:41:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.789 06:41:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.789 06:41:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.789 06:41:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.789 06:41:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.789 06:41:21 -- paths/export.sh@5 -- # export PATH 00:16:25.789 06:41:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.789 06:41:21 -- nvmf/common.sh@46 -- # : 0 00:16:25.789 06:41:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:26.047 06:41:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:26.047 06:41:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:26.047 06:41:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.047 06:41:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.047 06:41:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:26.047 06:41:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:26.047 06:41:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:26.047 06:41:21 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:26.047 06:41:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:26.047 06:41:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.047 06:41:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:26.047 06:41:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:26.047 06:41:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:26.047 06:41:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.047 06:41:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.047 06:41:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.047 06:41:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:26.047 06:41:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:26.047 06:41:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:26.047 06:41:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:26.047 06:41:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:26.047 06:41:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:26.047 06:41:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.047 06:41:21 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.047 06:41:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.047 06:41:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:26.047 06:41:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.047 06:41:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.047 06:41:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.047 06:41:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.047 06:41:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.047 06:41:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.047 06:41:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.047 06:41:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.047 06:41:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:26.047 06:41:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:26.047 Cannot find device "nvmf_tgt_br" 00:16:26.047 06:41:21 -- nvmf/common.sh@154 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.047 Cannot find device "nvmf_tgt_br2" 00:16:26.047 06:41:21 -- nvmf/common.sh@155 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:26.047 06:41:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:26.047 Cannot find device "nvmf_tgt_br" 00:16:26.047 06:41:21 -- nvmf/common.sh@157 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:26.047 Cannot find device "nvmf_tgt_br2" 00:16:26.047 06:41:21 -- nvmf/common.sh@158 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:26.047 06:41:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:26.047 06:41:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.047 06:41:21 -- nvmf/common.sh@161 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.047 06:41:21 -- nvmf/common.sh@162 -- # true 00:16:26.047 06:41:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.047 06:41:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.047 06:41:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.047 06:41:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.047 06:41:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.047 06:41:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.047 06:41:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.047 06:41:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.047 06:41:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.047 06:41:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:26.047 06:41:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:26.047 06:41:21 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:26.047 06:41:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:26.047 06:41:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.047 06:41:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.047 06:41:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.047 06:41:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:26.047 06:41:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:26.047 06:41:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.305 06:41:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.305 06:41:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.305 06:41:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.305 06:41:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.305 06:41:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:26.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:26.305 00:16:26.305 --- 10.0.0.2 ping statistics --- 00:16:26.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.305 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:26.305 06:41:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:26.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:26.305 00:16:26.305 --- 10.0.0.3 ping statistics --- 00:16:26.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.305 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:26.305 06:41:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:26.305 00:16:26.305 --- 10.0.0.1 ping statistics --- 00:16:26.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.305 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:26.305 06:41:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.305 06:41:21 -- nvmf/common.sh@421 -- # return 0 00:16:26.305 06:41:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:26.306 06:41:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.306 06:41:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:26.306 06:41:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:26.306 06:41:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.306 06:41:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:26.306 06:41:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:26.306 06:41:21 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:26.306 06:41:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:26.306 06:41:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.306 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:26.306 06:41:21 -- nvmf/common.sh@469 -- # nvmfpid=82742 00:16:26.306 06:41:21 -- nvmf/common.sh@470 -- # waitforlisten 82742 00:16:26.306 06:41:21 -- common/autotest_common.sh@829 -- # '[' -z 82742 ']' 00:16:26.306 06:41:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.306 06:41:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:26.306 06:41:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.306 06:41:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.306 06:41:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.306 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:16:26.306 [2024-12-05 06:41:21.650217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:26.306 [2024-12-05 06:41:21.650342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.563 [2024-12-05 06:41:21.789517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.563 [2024-12-05 06:41:21.829697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:26.563 [2024-12-05 06:41:21.829874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.563 [2024-12-05 06:41:21.829891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.563 [2024-12-05 06:41:21.829901] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
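The trace above is nvmf_veth_init building the test network: one veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to a single bridge, and three pings to confirm reachability in both directions. A condensed sketch of that topology, with names and addresses exactly as the trace prints them (the per-link `up` ordering is simplified here):

ip netns add nvmf_tgt_ns_spdk                              # the target app will run in this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target    <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic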
00:16:26.563 [2024-12-05 06:41:21.829937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.494 06:41:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.494 06:41:22 -- common/autotest_common.sh@862 -- # return 0 00:16:27.494 06:41:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:27.494 06:41:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.494 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:27.494 06:41:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.494 06:41:22 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:27.494 06:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.494 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:27.494 [2024-12-05 06:41:22.717525] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.494 [2024-12-05 06:41:22.725643] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:27.494 null0 00:16:27.494 [2024-12-05 06:41:22.757653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.494 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:27.494 06:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.494 06:41:22 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82780 00:16:27.494 06:41:22 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:27.494 06:41:22 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82780 /tmp/host.sock 00:16:27.494 06:41:22 -- common/autotest_common.sh@829 -- # '[' -z 82780 ']' 00:16:27.494 06:41:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:27.494 06:41:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.494 06:41:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:27.494 06:41:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.494 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:16:27.494 [2024-12-05 06:41:22.830722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:27.494 [2024-12-05 06:41:22.831279] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82780 ] 00:16:27.752 [2024-12-05 06:41:22.972009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.752 [2024-12-05 06:41:23.013462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.752 [2024-12-05 06:41:23.013946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.315 06:41:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.315 06:41:23 -- common/autotest_common.sh@862 -- # return 0 00:16:28.315 06:41:23 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.315 06:41:23 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:28.316 06:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.316 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:28.573 06:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.573 06:41:23 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:28.573 06:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.573 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:28.573 06:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.573 06:41:23 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:28.573 06:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.573 06:41:23 -- common/autotest_common.sh@10 -- # set +x 00:16:29.507 [2024-12-05 06:41:24.850574] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:29.507 [2024-12-05 06:41:24.850615] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:29.507 [2024-12-05 06:41:24.850633] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:29.507 [2024-12-05 06:41:24.856617] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:29.507 [2024-12-05 06:41:24.912562] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:29.507 [2024-12-05 06:41:24.912775] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:29.507 [2024-12-05 06:41:24.912845] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:29.507 [2024-12-05 06:41:24.912981] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:29.507 [2024-12-05 06:41:24.913078] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.507 06:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.507 [2024-12-05 
06:41:24.919326] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bddaf0 was disconnected and freed. delete nvme_qpair. 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.507 06:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.507 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.507 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.507 06:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.765 06:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.765 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.765 06:41:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.765 06:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.765 06:41:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.765 06:41:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.696 06:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.696 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.696 06:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:30.696 06:41:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.066 06:41:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.066 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.066 06:41:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.066 06:41:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
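The repeated rpc_cmd / jq / sort / xargs triples above, followed by `sleep 1`, are the harness polling once per second until the bdev list matches an expectation. A plausible reconstruction of the two helpers driving that loop (the real ones live in host/discovery_remove_ifc.sh; this is a sketch read off the trace, not the verbatim source):

get_bdev_list() {
    # list bdev names on the host app, normalized to one sorted line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll until the bdev list matches the expectation ('' means "bdev gone")
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}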
00:16:32.999 06:41:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.999 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.999 06:41:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.999 06:41:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.932 06:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.932 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.932 06:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.932 06:41:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.882 06:41:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.882 06:41:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.882 06:41:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.882 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:16:34.883 06:41:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.883 06:41:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.883 06:41:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.883 06:41:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.883 [2024-12-05 06:41:30.340328] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:34.883 [2024-12-05 06:41:30.340663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.883 [2024-12-05 06:41:30.340684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.883 [2024-12-05 06:41:30.340698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.883 [2024-12-05 06:41:30.340707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.883 [2024-12-05 06:41:30.340717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.883 [2024-12-05 06:41:30.340726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.883 [2024-12-05 06:41:30.340736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.883 [2024-12-05 06:41:30.340746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.883 [2024-12-05 
06:41:30.340757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.883 [2024-12-05 06:41:30.340766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.883 [2024-12-05 06:41:30.340776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2890 is same with the state(5) to be set 00:16:35.141 [2024-12-05 06:41:30.350325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2890 (9): Bad file descriptor 00:16:35.141 [2024-12-05 06:41:30.360345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:35.141 06:41:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.142 06:41:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.077 06:41:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.077 06:41:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.077 06:41:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.077 06:41:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.077 06:41:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.077 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:16:36.077 06:41:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.077 [2024-12-05 06:41:31.389369] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:37.012 [2024-12-05 06:41:32.411438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:38.386 [2024-12-05 06:41:33.435443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:38.386 [2024-12-05 06:41:33.435580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba2890 with addr=10.0.0.2, port=4420 00:16:38.386 [2024-12-05 06:41:33.435617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2890 is same with the state(5) to be set 00:16:38.386 [2024-12-05 06:41:33.435684] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:38.386 [2024-12-05 06:41:33.435707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:38.386 [2024-12-05 06:41:33.435727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:38.386 [2024-12-05 06:41:33.435747] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:38.386 [2024-12-05 06:41:33.436558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2890 (9): Bad file descriptor 00:16:38.386 [2024-12-05 06:41:33.436623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
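The reset/reconnect churn above is governed by the timeout knobs passed when discovery was started earlier in this trace. Repeating that RPC here with the options annotated (semantics as suggested by the SPDK bdev_nvme option names; the annotations are an interpretation, not log output):

# --ctrlr-loss-timeout-sec 2   : keep retrying for ~2s of loss before deleting the ctrlr
# --reconnect-delay-sec 1      : wait 1s between reconnect attempts
# --fast-io-fail-timeout-sec 1 : start failing queued I/O after 1s while still reconnecting
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach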
00:16:38.386 [2024-12-05 06:41:33.436674] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:38.386 [2024-12-05 06:41:33.436746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.386 [2024-12-05 06:41:33.436777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.386 [2024-12-05 06:41:33.436805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.386 [2024-12-05 06:41:33.436825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.386 [2024-12-05 06:41:33.436846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.386 [2024-12-05 06:41:33.436867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.386 [2024-12-05 06:41:33.436888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.386 [2024-12-05 06:41:33.436908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.386 [2024-12-05 06:41:33.436930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.386 [2024-12-05 06:41:33.436950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.386 [2024-12-05 06:41:33.436969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:38.386 [2024-12-05 06:41:33.437030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ef0 (9): Bad file descriptor 00:16:38.386 [2024-12-05 06:41:33.438030] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:38.386 [2024-12-05 06:41:33.438077] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:38.386 06:41:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.386 06:41:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:38.386 06:41:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.319 06:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.319 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.319 06:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.319 06:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.319 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.319 06:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:39.319 06:41:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:40.250 [2024-12-05 06:41:35.445853] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:40.250 [2024-12-05 06:41:35.445890] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:40.250 [2024-12-05 06:41:35.445906] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:40.250 [2024-12-05 06:41:35.451887] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:40.250 [2024-12-05 06:41:35.506933] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:40.250 [2024-12-05 06:41:35.507136] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:40.250 [2024-12-05 06:41:35.507200] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:40.250 [2024-12-05 06:41:35.507366] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:40.250 [2024-12-05 06:41:35.507438] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:40.250 [2024-12-05 06:41:35.514455] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b91e30 was disconnected and freed. delete nvme_qpair. 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.250 06:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.250 06:41:35 -- common/autotest_common.sh@10 -- # set +x 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.250 06:41:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:40.250 06:41:35 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82780 00:16:40.250 06:41:35 -- common/autotest_common.sh@936 -- # '[' -z 82780 ']' 00:16:40.250 06:41:35 -- common/autotest_common.sh@940 -- # kill -0 82780 00:16:40.250 06:41:35 -- common/autotest_common.sh@941 -- # uname 00:16:40.250 06:41:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.250 06:41:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82780 00:16:40.509 killing process with pid 82780 00:16:40.509 06:41:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:40.509 06:41:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:40.509 06:41:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82780' 00:16:40.509 06:41:35 -- common/autotest_common.sh@955 -- # kill 82780 00:16:40.509 06:41:35 -- common/autotest_common.sh@960 -- # wait 82780 00:16:40.509 06:41:35 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:40.509 06:41:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.509 06:41:35 -- nvmf/common.sh@116 -- # sync 00:16:40.509 06:41:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.509 06:41:35 -- nvmf/common.sh@119 -- # set +e 00:16:40.509 06:41:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.509 06:41:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.509 rmmod nvme_tcp 00:16:40.768 rmmod nvme_fabrics 00:16:40.768 rmmod nvme_keyring 00:16:40.768 06:41:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.768 06:41:36 -- nvmf/common.sh@123 -- # set -e 00:16:40.768 06:41:36 -- nvmf/common.sh@124 -- # return 0 00:16:40.768 06:41:36 -- nvmf/common.sh@477 -- # '[' -n 82742 ']' 00:16:40.768 06:41:36 -- nvmf/common.sh@478 -- # killprocess 82742 00:16:40.768 06:41:36 -- common/autotest_common.sh@936 -- # '[' -z 82742 ']' 00:16:40.768 06:41:36 -- common/autotest_common.sh@940 -- # kill -0 82742 00:16:40.768 06:41:36 -- common/autotest_common.sh@941 -- # uname 00:16:40.768 06:41:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.768 06:41:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82742 00:16:40.768 killing process with pid 82742 00:16:40.768 06:41:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.768 06:41:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
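The `kill -0` / `ps --no-headers -o comm=` / `kill` / `wait` sequence traced above is the harness's killprocess helper tearing down the host app. Its approximate shape, reconstructed from the xtrace (the real helper is in common/autotest_common.sh):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if already gone
    if [ "$(uname)" = Linux ]; then
        # check the command name; refuse to kill a sudo wrapper
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # reap so the exit status is collected
}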
00:16:40.768 06:41:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82742' 00:16:40.768 06:41:36 -- common/autotest_common.sh@955 -- # kill 82742 00:16:40.768 06:41:36 -- common/autotest_common.sh@960 -- # wait 82742 00:16:40.768 06:41:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.768 06:41:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:40.768 06:41:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:40.768 06:41:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.768 06:41:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:40.768 06:41:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.768 06:41:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.768 06:41:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.768 06:41:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:40.768 00:16:40.768 real 0m15.168s 00:16:40.768 user 0m24.447s 00:16:40.768 sys 0m2.426s 00:16:40.768 06:41:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.768 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 ************************************ 00:16:40.768 END TEST nvmf_discovery_remove_ifc 00:16:40.768 ************************************ 00:16:41.027 06:41:36 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:41.027 06:41:36 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:41.027 06:41:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.027 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 ************************************ 00:16:41.027 START TEST nvmf_digest 00:16:41.027 ************************************ 00:16:41.027 06:41:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:41.027 * Looking for test storage... 00:16:41.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:41.027 06:41:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.027 06:41:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.027 06:41:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.027 06:41:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.027 06:41:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.027 06:41:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.027 06:41:36 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.027 06:41:36 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.027 06:41:36 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.027 06:41:36 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.027 06:41:36 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.027 06:41:36 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.027 06:41:36 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.027 06:41:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.027 06:41:36 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.027 06:41:36 -- scripts/common.sh@344 -- # : 1 00:16:41.027 06:41:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.027 06:41:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.027 06:41:36 -- scripts/common.sh@364 -- # decimal 1 00:16:41.027 06:41:36 -- scripts/common.sh@352 -- # local d=1 00:16:41.027 06:41:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.027 06:41:36 -- scripts/common.sh@354 -- # echo 1 00:16:41.027 06:41:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.027 06:41:36 -- scripts/common.sh@365 -- # decimal 2 00:16:41.027 06:41:36 -- scripts/common.sh@352 -- # local d=2 00:16:41.027 06:41:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.027 06:41:36 -- scripts/common.sh@354 -- # echo 2 00:16:41.027 06:41:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.027 06:41:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.027 06:41:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.027 06:41:36 -- scripts/common.sh@367 -- # return 0 00:16:41.027 06:41:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.027 --rc genhtml_branch_coverage=1 00:16:41.027 --rc genhtml_function_coverage=1 00:16:41.027 --rc genhtml_legend=1 00:16:41.027 --rc geninfo_all_blocks=1 00:16:41.027 --rc geninfo_unexecuted_blocks=1 00:16:41.027 00:16:41.027 ' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.027 --rc genhtml_branch_coverage=1 00:16:41.027 --rc genhtml_function_coverage=1 00:16:41.027 --rc genhtml_legend=1 00:16:41.027 --rc geninfo_all_blocks=1 00:16:41.027 --rc geninfo_unexecuted_blocks=1 00:16:41.027 00:16:41.027 ' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.027 --rc genhtml_branch_coverage=1 00:16:41.027 --rc genhtml_function_coverage=1 00:16:41.027 --rc genhtml_legend=1 00:16:41.027 --rc geninfo_all_blocks=1 00:16:41.027 --rc geninfo_unexecuted_blocks=1 00:16:41.027 00:16:41.027 ' 00:16:41.027 06:41:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.027 --rc genhtml_branch_coverage=1 00:16:41.027 --rc genhtml_function_coverage=1 00:16:41.027 --rc genhtml_legend=1 00:16:41.027 --rc geninfo_all_blocks=1 00:16:41.027 --rc geninfo_unexecuted_blocks=1 00:16:41.027 00:16:41.027 ' 00:16:41.027 06:41:36 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.027 06:41:36 -- nvmf/common.sh@7 -- # uname -s 00:16:41.027 06:41:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.027 06:41:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.027 06:41:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.027 06:41:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.027 06:41:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.027 06:41:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.027 06:41:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.027 06:41:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.027 06:41:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.027 06:41:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.027 06:41:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:41.027 
06:41:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:16:41.027 06:41:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.027 06:41:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.027 06:41:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.027 06:41:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.027 06:41:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.027 06:41:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.027 06:41:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.027 06:41:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.027 06:41:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.027 06:41:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.027 06:41:36 -- paths/export.sh@5 -- # export PATH 00:16:41.027 06:41:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.027 06:41:36 -- nvmf/common.sh@46 -- # : 0 00:16:41.027 06:41:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.027 06:41:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.027 06:41:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.027 06:41:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.027 06:41:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.027 06:41:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
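The wall of text at paths/export.sh@2-6 above is just PATH being rebuilt, with every intermediate value echoed by xtrace. The net effect is roughly the following (directories as printed; the exact assignment style in the script may differ):

PATH=/opt/golangci/1.54.2/bin:$PATH   # Go linters
PATH=/opt/go/1.21.1/bin:$PATH         # Go toolchain
PATH=/opt/protoc/21.7/bin:$PATH       # protobuf compiler
export PATH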
00:16:41.027 06:41:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.027 06:41:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.027 06:41:36 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:41.027 06:41:36 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:41.027 06:41:36 -- host/digest.sh@16 -- # runtime=2 00:16:41.027 06:41:36 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:41.027 06:41:36 -- host/digest.sh@132 -- # nvmftestinit 00:16:41.027 06:41:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:41.027 06:41:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.028 06:41:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:41.028 06:41:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:41.028 06:41:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:41.028 06:41:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.028 06:41:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.028 06:41:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.286 06:41:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:41.286 06:41:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:41.286 06:41:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:41.286 06:41:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:41.286 06:41:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:41.286 06:41:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:41.286 06:41:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.286 06:41:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.286 06:41:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.286 06:41:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:41.286 06:41:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.286 06:41:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.286 06:41:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.286 06:41:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.286 06:41:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.286 06:41:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.286 06:41:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.286 06:41:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.286 06:41:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:41.286 06:41:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:41.286 Cannot find device "nvmf_tgt_br" 00:16:41.286 06:41:36 -- nvmf/common.sh@154 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.286 Cannot find device "nvmf_tgt_br2" 00:16:41.286 06:41:36 -- nvmf/common.sh@155 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:41.286 06:41:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:41.286 Cannot find device "nvmf_tgt_br" 00:16:41.286 06:41:36 -- nvmf/common.sh@157 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:41.286 Cannot find device "nvmf_tgt_br2" 00:16:41.286 06:41:36 -- nvmf/common.sh@158 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:41.286 06:41:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:41.286 
06:41:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.286 06:41:36 -- nvmf/common.sh@161 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.286 06:41:36 -- nvmf/common.sh@162 -- # true 00:16:41.286 06:41:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.286 06:41:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.286 06:41:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.286 06:41:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.286 06:41:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.286 06:41:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.286 06:41:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.286 06:41:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.286 06:41:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.286 06:41:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:41.286 06:41:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:41.286 06:41:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:41.545 06:41:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:41.545 06:41:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.545 06:41:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.545 06:41:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:41.545 06:41:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:41.545 06:41:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:41.545 06:41:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.545 06:41:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.545 06:41:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.545 06:41:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.545 06:41:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.545 06:41:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:41.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:41.545 00:16:41.545 --- 10.0.0.2 ping statistics --- 00:16:41.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.545 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:41.545 06:41:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:41.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:41.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:16:41.545 00:16:41.545 --- 10.0.0.3 ping statistics --- 00:16:41.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.545 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:41.545 06:41:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:41.545 00:16:41.545 --- 10.0.0.1 ping statistics --- 00:16:41.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.545 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:41.545 06:41:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.545 06:41:36 -- nvmf/common.sh@421 -- # return 0 00:16:41.545 06:41:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.545 06:41:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.545 06:41:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.545 06:41:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.545 06:41:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.545 06:41:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.545 06:41:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.545 06:41:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:41.545 06:41:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:41.545 06:41:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:41.545 06:41:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.545 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 ************************************ 00:16:41.545 START TEST nvmf_digest_clean 00:16:41.545 ************************************ 00:16:41.545 06:41:36 -- common/autotest_common.sh@1114 -- # run_digest 00:16:41.545 06:41:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:41.545 06:41:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.545 06:41:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.545 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 06:41:36 -- nvmf/common.sh@469 -- # nvmfpid=83197 00:16:41.545 06:41:36 -- nvmf/common.sh@470 -- # waitforlisten 83197 00:16:41.545 06:41:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:41.545 06:41:36 -- common/autotest_common.sh@829 -- # '[' -z 83197 ']' 00:16:41.545 06:41:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.545 06:41:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.545 06:41:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.545 06:41:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.545 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 [2024-12-05 06:41:36.922861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:41.545 [2024-12-05 06:41:36.922927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.805 [2024-12-05 06:41:37.054625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.805 [2024-12-05 06:41:37.094904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.805 [2024-12-05 06:41:37.095079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.805 [2024-12-05 06:41:37.095095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.805 [2024-12-05 06:41:37.095105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.805 [2024-12-05 06:41:37.095134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.805 06:41:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.805 06:41:37 -- common/autotest_common.sh@862 -- # return 0 00:16:41.805 06:41:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.805 06:41:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.805 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 06:41:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.805 06:41:37 -- host/digest.sh@120 -- # common_target_config 00:16:41.805 06:41:37 -- host/digest.sh@43 -- # rpc_cmd 00:16:41.805 06:41:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.805 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 null0 00:16:41.805 [2024-12-05 06:41:37.268499] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.064 [2024-12-05 06:41:37.292618] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.064 06:41:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.064 06:41:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:42.064 06:41:37 -- host/digest.sh@77 -- # local rw bs qd 00:16:42.064 06:41:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:42.064 06:41:37 -- host/digest.sh@80 -- # rw=randread 00:16:42.064 06:41:37 -- host/digest.sh@80 -- # bs=4096 00:16:42.064 06:41:37 -- host/digest.sh@80 -- # qd=128 00:16:42.064 06:41:37 -- host/digest.sh@82 -- # bperfpid=83227 00:16:42.065 06:41:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:42.065 06:41:37 -- host/digest.sh@83 -- # waitforlisten 83227 /var/tmp/bperf.sock 00:16:42.065 06:41:37 -- common/autotest_common.sh@829 -- # '[' -z 83227 ']' 00:16:42.065 06:41:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:42.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:42.065 06:41:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.065 06:41:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
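run_bperf's first step, visible above, is to launch bdevperf on its own RPC socket and block until that socket answers. The pattern, approximately (command line verbatim from the trace; the backgrounding shown here is an assumption about the script structure):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!                                    # 83227 in this run
waitforlisten "$bperfpid" /var/tmp/bperf.sock  # returns once the UNIX socket is live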
00:16:42.065 06:41:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.065 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:16:42.065 [2024-12-05 06:41:37.347981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:42.065 [2024-12-05 06:41:37.348084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83227 ] 00:16:42.065 [2024-12-05 06:41:37.485977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.065 [2024-12-05 06:41:37.518828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.323 06:41:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.323 06:41:37 -- common/autotest_common.sh@862 -- # return 0 00:16:42.323 06:41:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:42.323 06:41:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:42.323 06:41:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:42.581 06:41:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.581 06:41:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:42.840 nvme0n1 00:16:42.840 06:41:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:42.840 06:41:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:43.099 Running I/O for 2 seconds... 
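Once the socket is up, the whole test is three RPC-driven steps, all of which appear in the trace above: finish framework init, attach a controller with the data digest enabled, then run the workload. In sequence, with paths as printed:

# --ddgst turns on the NVMe/TCP data digest (crc32c over each data PDU),
# which is what makes the accel crc32c counters move during the run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests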
00:16:45.004 00:16:45.004 Latency(us) 00:16:45.004 [2024-12-05T06:41:40.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.004 [2024-12-05T06:41:40.470Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:45.004 nvme0n1 : 2.01 16315.63 63.73 0.00 0.00 7840.46 7030.23 21328.99 00:16:45.004 [2024-12-05T06:41:40.470Z] =================================================================================================================== 00:16:45.004 [2024-12-05T06:41:40.470Z] Total : 16315.63 63.73 0.00 0.00 7840.46 7030.23 21328.99 00:16:45.004 0 00:16:45.004 06:41:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:45.004 06:41:40 -- host/digest.sh@92 -- # get_accel_stats 00:16:45.004 06:41:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:45.004 06:41:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:45.004 | select(.opcode=="crc32c") 00:16:45.004 | "\(.module_name) \(.executed)"' 00:16:45.004 06:41:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:45.263 06:41:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:45.263 06:41:40 -- host/digest.sh@93 -- # exp_module=software 00:16:45.263 06:41:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:45.263 06:41:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:45.263 06:41:40 -- host/digest.sh@97 -- # killprocess 83227 00:16:45.263 06:41:40 -- common/autotest_common.sh@936 -- # '[' -z 83227 ']' 00:16:45.263 06:41:40 -- common/autotest_common.sh@940 -- # kill -0 83227 00:16:45.263 06:41:40 -- common/autotest_common.sh@941 -- # uname 00:16:45.263 06:41:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.263 06:41:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83227 00:16:45.263 06:41:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:45.263 killing process with pid 83227 00:16:45.263 06:41:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:45.263 06:41:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83227' 00:16:45.263 06:41:40 -- common/autotest_common.sh@955 -- # kill 83227 00:16:45.263 Received shutdown signal, test time was about 2.000000 seconds 00:16:45.263 00:16:45.263 Latency(us) 00:16:45.263 [2024-12-05T06:41:40.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.263 [2024-12-05T06:41:40.729Z] =================================================================================================================== 00:16:45.263 [2024-12-05T06:41:40.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.263 06:41:40 -- common/autotest_common.sh@960 -- # wait 83227 00:16:45.522 06:41:40 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:45.522 06:41:40 -- host/digest.sh@77 -- # local rw bs qd 00:16:45.522 06:41:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:45.522 06:41:40 -- host/digest.sh@80 -- # rw=randread 00:16:45.522 06:41:40 -- host/digest.sh@80 -- # bs=131072 00:16:45.522 06:41:40 -- host/digest.sh@80 -- # qd=16 00:16:45.522 06:41:40 -- host/digest.sh@82 -- # bperfpid=83275 00:16:45.522 06:41:40 -- host/digest.sh@83 -- # waitforlisten 83275 /var/tmp/bperf.sock 00:16:45.522 06:41:40 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:45.522 06:41:40 -- 
common/autotest_common.sh@829 -- # '[' -z 83275 ']' 00:16:45.522 06:41:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.522 06:41:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.522 06:41:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:45.522 06:41:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.522 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:45.522 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:45.522 Zero copy mechanism will not be used. 00:16:45.522 [2024-12-05 06:41:40.901097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:45.522 [2024-12-05 06:41:40.901181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83275 ] 00:16:45.782 [2024-12-05 06:41:41.030158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.782 [2024-12-05 06:41:41.065275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.782 06:41:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.782 06:41:41 -- common/autotest_common.sh@862 -- # return 0 00:16:45.782 06:41:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:45.782 06:41:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:45.782 06:41:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:46.041 06:41:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.041 06:41:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.299 nvme0n1 00:16:46.299 06:41:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:46.299 06:41:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:46.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:46.616 Zero copy mechanism will not be used. 00:16:46.616 Running I/O for 2 seconds... 
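Once a run finishes, the script verifies that crc32c digests were actually computed, and by which accel module. A sketch of that check as traced above (the jq filter is copied verbatim; treating software as the expected module mirrors the exp_module comparison in this log):

# Pull accel statistics from bdevperf and extract the crc32c counters
stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
read -r acc_module acc_executed < <(jq -rc '.operations[]
    | select(.opcode=="crc32c")
    | "\(.module_name) \(.executed)"' <<< "$stats")

# The test passes only if at least one crc32c operation ran in the expected module
(( acc_executed > 0 )) && [[ $acc_module == software ]]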
00:16:48.517 00:16:48.517 Latency(us) 00:16:48.517 [2024-12-05T06:41:43.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.517 [2024-12-05T06:41:43.983Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:48.517 nvme0n1 : 2.00 8105.03 1013.13 0.00 0.00 1971.35 1683.08 6315.29 00:16:48.517 [2024-12-05T06:41:43.983Z] =================================================================================================================== 00:16:48.517 [2024-12-05T06:41:43.983Z] Total : 8105.03 1013.13 0.00 0.00 1971.35 1683.08 6315.29 00:16:48.517 0 00:16:48.517 06:41:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:48.517 06:41:43 -- host/digest.sh@92 -- # get_accel_stats 00:16:48.517 06:41:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:48.517 06:41:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:48.517 06:41:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:48.517 | select(.opcode=="crc32c") 00:16:48.517 | "\(.module_name) \(.executed)"' 00:16:48.775 06:41:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:48.776 06:41:44 -- host/digest.sh@93 -- # exp_module=software 00:16:48.776 06:41:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:48.776 06:41:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:48.776 06:41:44 -- host/digest.sh@97 -- # killprocess 83275 00:16:48.776 06:41:44 -- common/autotest_common.sh@936 -- # '[' -z 83275 ']' 00:16:48.776 06:41:44 -- common/autotest_common.sh@940 -- # kill -0 83275 00:16:48.776 06:41:44 -- common/autotest_common.sh@941 -- # uname 00:16:48.776 06:41:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.776 06:41:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83275 00:16:48.776 06:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:48.776 killing process with pid 83275 00:16:48.776 06:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:48.776 06:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83275' 00:16:48.776 Received shutdown signal, test time was about 2.000000 seconds 00:16:48.776 00:16:48.776 Latency(us) 00:16:48.776 [2024-12-05T06:41:44.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.776 [2024-12-05T06:41:44.242Z] =================================================================================================================== 00:16:48.776 [2024-12-05T06:41:44.242Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.776 06:41:44 -- common/autotest_common.sh@955 -- # kill 83275 00:16:48.776 06:41:44 -- common/autotest_common.sh@960 -- # wait 83275 00:16:49.034 06:41:44 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:49.034 06:41:44 -- host/digest.sh@77 -- # local rw bs qd 00:16:49.034 06:41:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:49.034 06:41:44 -- host/digest.sh@80 -- # rw=randwrite 00:16:49.034 06:41:44 -- host/digest.sh@80 -- # bs=4096 00:16:49.034 06:41:44 -- host/digest.sh@80 -- # qd=128 00:16:49.034 06:41:44 -- host/digest.sh@82 -- # bperfpid=83328 00:16:49.034 06:41:44 -- host/digest.sh@83 -- # waitforlisten 83328 /var/tmp/bperf.sock 00:16:49.034 06:41:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:49.034 06:41:44 -- 
common/autotest_common.sh@829 -- # '[' -z 83328 ']' 00:16:49.034 06:41:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:49.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:49.034 06:41:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.034 06:41:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:49.034 06:41:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.034 06:41:44 -- common/autotest_common.sh@10 -- # set +x 00:16:49.034 [2024-12-05 06:41:44.322723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:49.034 [2024-12-05 06:41:44.322839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83328 ] 00:16:49.034 [2024-12-05 06:41:44.460086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.034 [2024-12-05 06:41:44.494874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.967 06:41:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.967 06:41:45 -- common/autotest_common.sh@862 -- # return 0 00:16:49.967 06:41:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:49.967 06:41:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:49.967 06:41:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:50.224 06:41:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.224 06:41:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.482 nvme0n1 00:16:50.482 06:41:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:50.482 06:41:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:50.482 Running I/O for 2 seconds... 
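The killprocess helper traced after each run reduces to a liveness probe, a signal, and a reap. A simplified sketch of the logic the autotest_common.sh xtrace shows (sudo handling and error paths trimmed):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1             # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"                        # SIGTERM; the reactor shuts down cleanly
    fi
    wait "$pid"                            # reap it and propagate the exit status
}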
00:16:53.008 00:16:53.008 Latency(us) 00:16:53.008 [2024-12-05T06:41:48.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.008 [2024-12-05T06:41:48.474Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.008 nvme0n1 : 2.00 17502.27 68.37 0.00 0.00 7307.54 6642.97 16086.11 00:16:53.008 [2024-12-05T06:41:48.474Z] =================================================================================================================== 00:16:53.008 [2024-12-05T06:41:48.474Z] Total : 17502.27 68.37 0.00 0.00 7307.54 6642.97 16086.11 00:16:53.008 0 00:16:53.008 06:41:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:53.008 06:41:47 -- host/digest.sh@92 -- # get_accel_stats 00:16:53.008 06:41:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:53.008 06:41:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:53.008 | select(.opcode=="crc32c") 00:16:53.008 | "\(.module_name) \(.executed)"' 00:16:53.008 06:41:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:53.008 06:41:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:53.009 06:41:48 -- host/digest.sh@93 -- # exp_module=software 00:16:53.009 06:41:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:53.009 06:41:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:53.009 06:41:48 -- host/digest.sh@97 -- # killprocess 83328 00:16:53.009 06:41:48 -- common/autotest_common.sh@936 -- # '[' -z 83328 ']' 00:16:53.009 06:41:48 -- common/autotest_common.sh@940 -- # kill -0 83328 00:16:53.009 06:41:48 -- common/autotest_common.sh@941 -- # uname 00:16:53.009 06:41:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.009 06:41:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83328 00:16:53.009 06:41:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.009 killing process with pid 83328 00:16:53.009 06:41:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.009 06:41:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83328' 00:16:53.009 06:41:48 -- common/autotest_common.sh@955 -- # kill 83328 00:16:53.009 Received shutdown signal, test time was about 2.000000 seconds 00:16:53.009 00:16:53.009 Latency(us) 00:16:53.009 [2024-12-05T06:41:48.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.009 [2024-12-05T06:41:48.475Z] =================================================================================================================== 00:16:53.009 [2024-12-05T06:41:48.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.009 06:41:48 -- common/autotest_common.sh@960 -- # wait 83328 00:16:53.009 06:41:48 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:53.009 06:41:48 -- host/digest.sh@77 -- # local rw bs qd 00:16:53.009 06:41:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:53.009 06:41:48 -- host/digest.sh@80 -- # rw=randwrite 00:16:53.009 06:41:48 -- host/digest.sh@80 -- # bs=131072 00:16:53.009 06:41:48 -- host/digest.sh@80 -- # qd=16 00:16:53.009 06:41:48 -- host/digest.sh@82 -- # bperfpid=83384 00:16:53.009 06:41:48 -- host/digest.sh@83 -- # waitforlisten 83384 /var/tmp/bperf.sock 00:16:53.009 06:41:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:53.009 06:41:48 -- 
common/autotest_common.sh@829 -- # '[' -z 83384 ']' 00:16:53.009 06:41:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:53.009 06:41:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:53.009 06:41:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:53.009 06:41:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.009 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:16:53.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:53.009 Zero copy mechanism will not be used. 00:16:53.009 [2024-12-05 06:41:48.430570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:53.009 [2024-12-05 06:41:48.430654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83384 ] 00:16:53.268 [2024-12-05 06:41:48.565080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.268 [2024-12-05 06:41:48.597823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.268 06:41:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.268 06:41:48 -- common/autotest_common.sh@862 -- # return 0 00:16:53.268 06:41:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:53.268 06:41:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:53.268 06:41:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:53.526 06:41:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.526 06:41:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:54.092 nvme0n1 00:16:54.092 06:41:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:54.092 06:41:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:54.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:54.092 Zero copy mechanism will not be used. 00:16:54.092 Running I/O for 2 seconds... 
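A quick way to sanity-check the result tables: MiB/s is just IOPS times the I/O size, so each 131072-byte I/O counts 0.125 MiB and each 4096-byte I/O counts 1/256 MiB. The tables above check out:

# randread,  4096 B,  qd 128: 16315.63 IOPS / 256   =   63.73 MiB/s
# randread,  131072 B, qd 16:  8105.03 IOPS * 0.125 = 1013.13 MiB/s
# randwrite, 4096 B,  qd 128: 17502.27 IOPS / 256   =   68.37 MiB/s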
00:16:55.991 00:16:55.991 Latency(us) 00:16:55.991 [2024-12-05T06:41:51.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.991 [2024-12-05T06:41:51.457Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:55.991 nvme0n1 : 2.00 6522.86 815.36 0.00 0.00 2447.80 1742.66 7089.80 00:16:55.991 [2024-12-05T06:41:51.457Z] =================================================================================================================== 00:16:55.991 [2024-12-05T06:41:51.457Z] Total : 6522.86 815.36 0.00 0.00 2447.80 1742.66 7089.80 00:16:55.991 0 00:16:55.991 06:41:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:55.991 06:41:51 -- host/digest.sh@92 -- # get_accel_stats 00:16:55.991 06:41:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:55.991 06:41:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:55.991 | select(.opcode=="crc32c") 00:16:55.991 | "\(.module_name) \(.executed)"' 00:16:55.991 06:41:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:56.249 06:41:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:56.249 06:41:51 -- host/digest.sh@93 -- # exp_module=software 00:16:56.249 06:41:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:56.249 06:41:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:56.249 06:41:51 -- host/digest.sh@97 -- # killprocess 83384 00:16:56.249 06:41:51 -- common/autotest_common.sh@936 -- # '[' -z 83384 ']' 00:16:56.249 06:41:51 -- common/autotest_common.sh@940 -- # kill -0 83384 00:16:56.249 06:41:51 -- common/autotest_common.sh@941 -- # uname 00:16:56.249 06:41:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.249 06:41:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83384 00:16:56.249 killing process with pid 83384 00:16:56.249 Received shutdown signal, test time was about 2.000000 seconds 00:16:56.249 00:16:56.249 Latency(us) 00:16:56.249 [2024-12-05T06:41:51.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.249 [2024-12-05T06:41:51.715Z] =================================================================================================================== 00:16:56.249 [2024-12-05T06:41:51.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.249 06:41:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:56.249 06:41:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:56.249 06:41:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83384' 00:16:56.249 06:41:51 -- common/autotest_common.sh@955 -- # kill 83384 00:16:56.249 06:41:51 -- common/autotest_common.sh@960 -- # wait 83384 00:16:56.508 06:41:51 -- host/digest.sh@126 -- # killprocess 83197 00:16:56.508 06:41:51 -- common/autotest_common.sh@936 -- # '[' -z 83197 ']' 00:16:56.508 06:41:51 -- common/autotest_common.sh@940 -- # kill -0 83197 00:16:56.508 06:41:51 -- common/autotest_common.sh@941 -- # uname 00:16:56.508 06:41:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.508 06:41:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83197 00:16:56.508 killing process with pid 83197 00:16:56.508 06:41:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.508 06:41:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.508 06:41:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83197' 00:16:56.508 
06:41:51 -- common/autotest_common.sh@955 -- # kill 83197 00:16:56.508 06:41:51 -- common/autotest_common.sh@960 -- # wait 83197 00:16:56.767 ************************************ 00:16:56.767 END TEST nvmf_digest_clean 00:16:56.767 ************************************ 00:16:56.767 00:16:56.767 real 0m15.116s 00:16:56.767 user 0m29.368s 00:16:56.767 sys 0m4.333s 00:16:56.767 06:41:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:56.767 06:41:51 -- common/autotest_common.sh@10 -- # set +x 00:16:56.767 06:41:52 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:56.767 06:41:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:56.767 06:41:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.767 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.767 ************************************ 00:16:56.767 START TEST nvmf_digest_error 00:16:56.767 ************************************ 00:16:56.767 06:41:52 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:56.767 06:41:52 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:56.767 06:41:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.767 06:41:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.767 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.767 06:41:52 -- nvmf/common.sh@469 -- # nvmfpid=83459 00:16:56.767 06:41:52 -- nvmf/common.sh@470 -- # waitforlisten 83459 00:16:56.767 06:41:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:56.767 06:41:52 -- common/autotest_common.sh@829 -- # '[' -z 83459 ']' 00:16:56.767 06:41:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.767 06:41:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.767 06:41:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.767 06:41:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.767 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:56.767 [2024-12-05 06:41:52.103805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:56.768 [2024-12-05 06:41:52.103903] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.027 [2024-12-05 06:41:52.243456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.027 [2024-12-05 06:41:52.278505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.027 [2024-12-05 06:41:52.278631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.027 [2024-12-05 06:41:52.278645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.027 [2024-12-05 06:41:52.278653] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
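The nvmf_digest_error test starting here brings the target up inside a network namespace, paused and with every tracepoint group enabled. The launch traced above as a standalone command (capturing the pid mirrors the nvmfpid assignment in the trace), plus the snapshot step the NOTICE suggests:

# Start the target paused (--wait-for-rpc) with tracepoint mask 0xFFFF
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Capture a snapshot of events at runtime, per the app_setup_trace NOTICE above
# spdk_trace -s nvmf -i 0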
00:16:57.027 [2024-12-05 06:41:52.278676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.027 06:41:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.027 06:41:52 -- common/autotest_common.sh@862 -- # return 0 00:16:57.027 06:41:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.027 06:41:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.027 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.027 06:41:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.027 06:41:52 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:57.027 06:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.027 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.027 [2024-12-05 06:41:52.395020] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:57.027 06:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.027 06:41:52 -- host/digest.sh@104 -- # common_target_config 00:16:57.027 06:41:52 -- host/digest.sh@43 -- # rpc_cmd 00:16:57.027 06:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.027 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.027 null0 00:16:57.028 [2024-12-05 06:41:52.466044] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.028 [2024-12-05 06:41:52.490208] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.287 06:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.287 06:41:52 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:57.287 06:41:52 -- host/digest.sh@54 -- # local rw bs qd 00:16:57.287 06:41:52 -- host/digest.sh@56 -- # rw=randread 00:16:57.287 06:41:52 -- host/digest.sh@56 -- # bs=4096 00:16:57.287 06:41:52 -- host/digest.sh@56 -- # qd=128 00:16:57.287 06:41:52 -- host/digest.sh@58 -- # bperfpid=83485 00:16:57.287 06:41:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:57.287 06:41:52 -- host/digest.sh@60 -- # waitforlisten 83485 /var/tmp/bperf.sock 00:16:57.287 06:41:52 -- common/autotest_common.sh@829 -- # '[' -z 83485 ']' 00:16:57.287 06:41:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:57.287 06:41:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.287 06:41:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:57.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:57.287 06:41:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.287 06:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.287 [2024-12-05 06:41:52.546818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:57.287 [2024-12-05 06:41:52.547120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83485 ] 00:16:57.287 [2024-12-05 06:41:52.685574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.287 [2024-12-05 06:41:52.718005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.224 06:41:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.224 06:41:53 -- common/autotest_common.sh@862 -- # return 0 00:16:58.224 06:41:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.224 06:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.483 06:41:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:58.483 06:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.483 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:16:58.483 06:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.483 06:41:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.483 06:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.742 nvme0n1 00:16:58.742 06:41:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:58.742 06:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.742 06:41:54 -- common/autotest_common.sh@10 -- # set +x 00:16:58.742 06:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.742 06:41:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:58.742 06:41:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:59.001 Running I/O for 2 seconds... 
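Unlike the clean tests, this run arms the target's accel error module before perform_tests: crc32c was assigned to the error module at startup, and the two RPCs traced above first clear any stale injection and then arm corruption. Corrupted digests from the target fail the initiator's data digest check, which is what produces the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) records that follow; because bdev_nvme_set_options was called with --bdev-retry-count -1, those I/Os keep being retried rather than failing outright. The two RPCs, verbatim from the trace (issued against the target's default socket via rpc_cmd):

# Reset injection, then arm crc32c corruption (-i 256 as traced above)
rpc_cmd accel_error_inject_error -o crc32c -t disable
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256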
00:16:59.001 [2024-12-05 06:41:54.239685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.239751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.239781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.255736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.255775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.255804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.271533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.271771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.271790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.287235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.287482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.287500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.302546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.302738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.302755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.318448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.318488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.318501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.335350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.335390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.335404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.352765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.352803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.352832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.368036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.368073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.384372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.384596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.384614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.400349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.400385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.001 [2024-12-05 06:41:54.400412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.001 [2024-12-05 06:41:54.414977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.001 [2024-12-05 06:41:54.415012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.002 [2024-12-05 06:41:54.415039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.002 [2024-12-05 06:41:54.429903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.002 [2024-12-05 06:41:54.430088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.002 [2024-12-05 06:41:54.430105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.002 [2024-12-05 06:41:54.444715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.002 [2024-12-05 06:41:54.444899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.002 [2024-12-05 06:41:54.444915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.002 [2024-12-05 06:41:54.461481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.002 [2024-12-05 06:41:54.461526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.002 [2024-12-05 06:41:54.461561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.479420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.479463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.479478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.494448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.494637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.494654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.509551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.509733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.524311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.524375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.524403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.538984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.539020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.539047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.553518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.553553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.553580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.568244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.568279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.568307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.584319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.584558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.584576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.602377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.602442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.602472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.620503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.620541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.620570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.637344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.637405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.637435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.653661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.653698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.653726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.669037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.669072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9048 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.669099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.684283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.684507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.684524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.699400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.699576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.699607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.261 [2024-12-05 06:41:54.714380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.261 [2024-12-05 06:41:54.714569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.261 [2024-12-05 06:41:54.714585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.730934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.730978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.731009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.747520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.747563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.747578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.763134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.763171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.763200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.778724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.778760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:10696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.778787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.793823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.793886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.808899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.808934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.808961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.824152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.824346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.824365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.839493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.839532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.839546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.855234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.855271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.855319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.871648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.871888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.871908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.887937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.887980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.888009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.904105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.904142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.904171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.919375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.919413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.919427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.934351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.934383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.934394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.949572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.949760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.949778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.964660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.522 [2024-12-05 06:41:54.964722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.522 [2024-12-05 06:41:54.979634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 00:16:59.522 [2024-12-05 06:41:54.979819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.523 [2024-12-05 06:41:54.979835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.780 [2024-12-05 06:41:54.995818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410) 
00:16:59.780 [2024-12-05 06:41:54.995856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:59.780 [2024-12-05 06:41:54.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:59.780 [2024-12-05 06:41:55.010676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410)
00:16:59.780 [2024-12-05 06:41:55.010712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:59.780 [2024-12-05 06:41:55.010739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data-digest-error/READ/completion triplet repeats for roughly 75 more single-block reads on tqpair=(0x11b2410) between 06:41:55.025977 and 06:41:56.195318: cids 101 through 125 odd ascending, then 126 and even cids 124 down to 4, lba varying, and every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 ...]
00:17:00.815 [2024-12-05 06:41:56.211022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b2410)
00:17:00.815 [2024-12-05 06:41:56.211061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:00.815 [2024-12-05 06:41:56.211089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:00.815 Latency(us)
00:17:00.815 [2024-12-05T06:41:56.281Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average       min        max
00:17:00.815 [2024-12-05T06:41:56.281Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:00.815 nvme0n1            :       2.01       16078.54     62.81      0.00    0.00    7955.97   7000.44   32172.22
00:17:00.815 [2024-12-05T06:41:56.281Z] ===================================================================================================================
00:17:00.815 [2024-12-05T06:41:56.281Z] Total              :                  16078.54     62.81      0.00    0.00    7955.97   7000.44   32172.22
00:17:00.815 0
06:41:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
06:41:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
06:41:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
06:41:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
06:41:56 -- host/digest.sh@71 -- # (( 126 > 0 ))
06:41:56 -- host/digest.sh@73 -- # killprocess 83485
06:41:56 -- common/autotest_common.sh@936 -- # '[' -z 83485 ']'
06:41:56 -- common/autotest_common.sh@940 -- # kill -0 83485
06:41:56 -- common/autotest_common.sh@941 -- # uname
06:41:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
06:41:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83485
killing process with pid 83485
Received shutdown signal, test time was about 2.000000 seconds
00:17:01.073 Latency(us)
00:17:01.073 [2024-12-05T06:41:56.539Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average   min   max
00:17:01.073 [2024-12-05T06:41:56.539Z] ===================================================================================================================
00:17:01.073 [2024-12-05T06:41:56.539Z] Total              :       0.00  0.00   0.00    0.00  0.00     0.00  0.00
06:41:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1
06:41:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
06:41:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83485'
06:41:56 -- common/autotest_common.sh@955 -- # kill 83485
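For reference, the get_transient_errcount check traced above reduces to one iostat RPC plus a jq filter. A minimal standalone sketch, with the RPC invocation and filter copied from the trace and digest.sh's helper plumbing simplified:

    # Sketch of digest.sh's get_transient_errcount as traced above.
    # Relies on bdev_nvme_set_options --nvme-error-stat having been set,
    # so failed NVMe completions are tallied per status code.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # This run counted 126 transient transport errors, so the assertion holds:
    (( $(get_transient_errcount nvme0n1) > 0 ))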
06:41:56 -- common/autotest_common.sh@960 -- # wait 83485
06:41:56 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
06:41:56 -- host/digest.sh@54 -- # local rw bs qd
06:41:56 -- host/digest.sh@56 -- # rw=randread
06:41:56 -- host/digest.sh@56 -- # bs=131072
06:41:56 -- host/digest.sh@56 -- # qd=16
06:41:56 -- host/digest.sh@58 -- # bperfpid=83547
06:41:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
06:41:56 -- host/digest.sh@60 -- # waitforlisten 83547 /var/tmp/bperf.sock
06:41:56 -- common/autotest_common.sh@829 -- # '[' -z 83547 ']'
06:41:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
06:41:56 -- common/autotest_common.sh@834 -- # local max_retries=100
06:41:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:41:56 -- common/autotest_common.sh@838 -- # xtrace_disable
06:41:56 -- common/autotest_common.sh@10 -- # set +x
00:17:01.331 [2024-12-05 06:41:56.709534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:01.331 [2024-12-05 06:41:56.709790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83547 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
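The run_bperf_err setup traced above starts a second bdevperf instance for the 128 KiB, qd=16 pass. A minimal sketch of the launch-and-wait step, with flags copied from the trace; the rpc_get_methods probe is an assumption standing in for autotest_common.sh's waitforlisten loop:

    # Start bdevperf in wait-for-RPC mode (-z) on a private socket,
    # background it, then poll until the socket answers RPCs.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    max_retries=100
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            rpc_get_methods &>/dev/null; do
        (( --max_retries > 0 )) || { echo "bdevperf never listened" >&2; exit 1; }
        sleep 0.1
    done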
00:17:01.590 [2024-12-05 06:41:56.841360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.590 [2024-12-05 06:41:56.876593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
06:41:57 -- common/autotest_common.sh@858 -- # (( i == 0 ))
06:41:57 -- common/autotest_common.sh@862 -- # return 0
06:41:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:41:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:41:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable
06:41:57 -- common/autotest_common.sh@10 -- # set +x
06:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:41:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
06:41:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
06:41:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
06:41:58 -- common/autotest_common.sh@561 -- # xtrace_disable
06:41:58 -- common/autotest_common.sh@10 -- # set +x
06:41:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:41:58 -- host/digest.sh@69 -- # bperf_py perform_tests
06:41:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
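The RPC sequence just traced is the core of the digest-error test. A condensed bash sketch of the same steps, with all arguments taken verbatim from the trace; digest.sh routes these through its bperf_rpc/rpc_cmd helpers, simplified here to direct rpc.py calls:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

    # Tally NVMe error completions per status code; retry failed bdev I/O
    # without limit (-1) so errors surface as counters, not test aborts.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean slate: no crc32c error injection yet.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the TCP target with data digest enabled (--ddgst); prints 'nvme0n1'.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption so received payloads fail the digest check
    # (the -i 32 argument is verbatim from the trace above).
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the timed workload in the already-running bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests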
00:17:03.041 [2024-12-05 06:41:58.359578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.041 [2024-12-05 06:41:58.359677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.041 [2024-12-05 06:41:58.359721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:03.041 [2024-12-05 06:41:58.363712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.041 [2024-12-05 06:41:58.363911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.041 [2024-12-05 06:41:58.364046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same triplet repeats for roughly 45 more 32-block reads on tqpair=(0xb4a5b0) between 06:41:58.368192 and 06:41:58.557625, all sqid:1 cid:15 len:32 with lba varying and sqhd cycling 0001/0021/0041/0061; every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:17:03.302 [2024-12-05 06:41:58.561716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.302 [2024-12-05 06:41:58.561753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.302 [2024-12-05 06:41:58.561781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:03.302 [2024-12-05 06:41:58.565819]
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.565855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.565884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.569758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.569795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.569823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.573697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.573733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.573761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.577633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.577670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.577698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.581639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.581676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.581703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.585690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.585742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.585786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.589654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.589690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.589718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:03.302 [2024-12-05 06:41:58.593721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.593758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.593786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.597834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.597871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.597899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.601919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.601957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.601986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.606273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.606311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.606369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.610778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.610979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.610997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.615620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.615677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.619961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.620001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.620031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.624264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.624302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.624362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.628646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.628684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.628712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.632923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.632962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.632991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.637307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.637370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.637401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.641559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.302 [2024-12-05 06:41:58.641595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.302 [2024-12-05 06:41:58.641623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.302 [2024-12-05 06:41:58.645950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.645988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.646017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.650255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.650292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.650320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.654440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.654477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.654506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.658686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.658722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.658751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.662733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.662770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.662798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.666931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.666968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.666997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.671070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.671108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.671136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.675360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.675396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.675409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.679468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.679507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.303 [2024-12-05 06:41:58.679521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.683902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.683940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.683970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.688050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.688087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.688116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.692450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.692501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.692530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.696583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.696620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.696648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.700591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.700628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.700656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.704897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.704936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.303 [2024-12-05 06:41:58.704965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.303 [2024-12-05 06:41:58.709122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.303 [2024-12-05 06:41:58.709160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 
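The failures above all follow one shape: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest mismatch on the queue pair, the offending READ is printed, and the command completes with a transient transport error. In NVMe/TCP the data digest (the DDGST field trailing the payload of data-bearing PDUs) is a CRC32C over the PDU data; judging by the callback name, SPDK computes it on the receive path through its accel framework. What follows is a minimal, self-contained sketch of the checksum itself; it is illustrative only, not SPDK's code, and the bitwise form trades speed for brevity.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* CRC32C (Castagnoli): reflected polynomial 0x82F63B78, initial value
 * 0xFFFFFFFF, final XOR 0xFFFFFFFF. NVMe/TCP uses this CRC for the HDGST
 * and DDGST fields when header/data digests are negotiated. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* "123456789" is the conventional CRC check string; any correct CRC32C
     * maps it to 0xE3069283, which makes a quick self-test. */
    const uint8_t msg[] = "123456789";
    printf("DDGST over check string: 0x%08X\n", (unsigned)crc32c(msg, sizeof(msg) - 1));
    /* A receiver recomputes this over the PDU payload and compares it with
     * the DDGST it was sent; a mismatch is the "data digest error" above. */
    return 0;
}

Production code would use a table-driven variant or the SSE4.2 crc32 instruction; the loop above is simply the specification written out.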
00:17:03.303 [2024-12-05 06:41:58.713461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.303 [2024-12-05 06:41:58.713497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.303 [2024-12-05 06:41:58.713526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:03.303-00:17:03.564 [... the same digest-error/print/completion triplet repeats for several dozen more LBAs on tqpair=(0xb4a5b0) ...]
00:17:03.564 [2024-12-05 06:41:58.857648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.564 [2024-12-05 06:41:58.857680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.564 [2024-12-05 06:41:58.857693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
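The "(00/22) ... p:0 m:0 dnr:0" suffix in each completion print is SPDK's rendering of the CQE status: status code type 0x0 (generic command status) and status code 0x22 (transient transport error), with the do-not-retry bit clear, meaning the host may resubmit the command. A small decoder for that status word, assuming the field layout of completion-queue-entry dword 3 from the NVMe base specification (an assumption worth checking against the spec revision in use):

#include <stdint.h>
#include <stdio.h>

/* Decode the status portion of an NVMe CQE (dword 3), the fields behind
 * SPDK's "(sct/sc) ... p m dnr" notation in the log above. */
static void print_status(uint32_t dw3)
{
    unsigned p   = (dw3 >> 16) & 0x1;   /* phase tag */
    unsigned sc  = (dw3 >> 17) & 0xFF;  /* status code: 0x22 = transient transport error */
    unsigned sct = (dw3 >> 25) & 0x7;   /* status code type: 0x0 = generic command status */
    unsigned m   = (dw3 >> 30) & 0x1;   /* more: extra info in the error log page */
    unsigned dnr = (dw3 >> 31) & 0x1;   /* do-not-retry: 0 means a retry is permitted */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0, SC 0x22, P/M/DNR all zero: matches the completions above. */
    uint32_t dw3 = 0x22u << 17;
    print_status(dw3);  /* prints: (00/22) p:0 m:0 dnr:0 */
    return 0;
}

With DNR left at zero, the status is retryable, which is why an injected digest error surfaces as "transient" rather than fatal.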
00:17:03.564 [2024-12-05 06:41:58.861752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.564 [2024-12-05 06:41:58.861788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.564 [2024-12-05 06:41:58.861800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:03.564-00:17:03.827 [... the triplet repeats for several dozen more LBAs on tqpair=(0xb4a5b0), all completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:17:03.827 [2024-12-05 06:41:59.055763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:03.827 [2024-12-05 06:41:59.055810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.827 [2024-12-05 06:41:59.055822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:03.827 [2024-12-05 06:41:59.059937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.059985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.059997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.064059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.064108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.064121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.068121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.068169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.068180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.072240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.072288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.076277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.076324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.076347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.080272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.080320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.080344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.084203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.084250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.084262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.088286] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.088342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.088355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.092289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.092348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.092361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.096365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.096423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.096435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.100577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.100626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.100638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.104637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.104685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.104698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.108760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.108808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.108820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.112717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.112765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.112777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:03.827 [2024-12-05 06:41:59.116786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.116834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.116862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.120861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.120909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.120921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.124773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.124820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.124832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.128709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.128757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.128769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.132695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.132741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.136759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.136807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.136819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.140795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.140842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.140871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.144773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.144820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.144832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.148749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.148796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.827 [2024-12-05 06:41:59.148808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.827 [2024-12-05 06:41:59.152878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.827 [2024-12-05 06:41:59.152926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.156947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.156996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.160818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.160881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.160893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.164930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.164977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.164990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.168943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.169002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.172961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.173009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.173022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.177005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.177054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.177067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.181048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.181096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.181108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.184999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.185063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.185075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.189005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.189052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.189064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.193139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.193187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.193199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.197214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.197261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.828 [2024-12-05 06:41:59.197273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.201159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.201206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.201218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.205589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.205636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.205648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.210169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.210235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.214779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.214829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.214859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.219406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.219439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.219452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.224104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.224159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.224204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.228595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.228642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.228653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.233155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.233220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.233233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.237883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.237945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.237959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.242335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.242391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.242403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.246773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.246833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.251448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.251484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.251497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.256045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.256084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.256098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.260735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.260784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.260797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.828 [2024-12-05 06:41:59.265360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.828 [2024-12-05 06:41:59.265416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.828 [2024-12-05 06:41:59.265428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.829 [2024-12-05 06:41:59.269900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.829 [2024-12-05 06:41:59.269942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.829 [2024-12-05 06:41:59.269956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.829 [2024-12-05 06:41:59.274682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.829 [2024-12-05 06:41:59.274731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.829 [2024-12-05 06:41:59.274743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.829 [2024-12-05 06:41:59.279169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.829 [2024-12-05 06:41:59.279217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.829 [2024-12-05 06:41:59.279241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.829 [2024-12-05 06:41:59.283771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.829 [2024-12-05 06:41:59.283819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.829 [2024-12-05 06:41:59.283830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.829 [2024-12-05 06:41:59.288692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:03.829 [2024-12-05 06:41:59.288744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.829 [2024-12-05 06:41:59.288772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.293104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 
00:17:04.090 [2024-12-05 06:41:59.293166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.293179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.297914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.297983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.297996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.302270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.302348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.302362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.306297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.306355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.306367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.310199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.310247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.310259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.314278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.314324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.314338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.318271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.318318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.318341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.322284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.322341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.322355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.326364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.326412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.326424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.330434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.330482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.334445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.334491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.334503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.338545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.338593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.338605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.342605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.342652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.342663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.346616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.346663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.346675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.350637] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.350670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.350682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.354569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.354617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.354629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.358614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.358661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.358672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.362782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.362830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.362858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.366787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.366834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.366862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.370907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.370956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.370968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.375026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.375074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.375086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:04.090 [2024-12-05 06:41:59.379126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.379174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.379187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.383163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.383211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.383223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.387175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.387223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.090 [2024-12-05 06:41:59.387246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.090 [2024-12-05 06:41:59.391224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.090 [2024-12-05 06:41:59.391290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.391334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.395342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.395387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.395399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.399263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.399333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.399358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.403290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.403367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.403379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.407268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.407337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.407364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.411269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.411354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.411368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.415243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.415275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.415307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.419802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.419851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.419863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.424161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.424210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.424222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.428818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.428867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.428880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.433454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.433493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.433506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.437830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.437877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.437889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.442064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.442113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.442125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.446364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.446405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.446418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.450624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.450658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.450670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.454854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.454902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.454915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.459086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.459134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.091 [2024-12-05 06:41:59.459146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.091 [2024-12-05 06:41:59.463052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.091 [2024-12-05 06:41:59.463100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.091 [2024-12-05 06:41:59.463112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:04.091 [2024-12-05 06:41:59.466990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:04.091 [2024-12-05 06:41:59.467046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.091 [2024-12-05 06:41:59.467059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-entry pattern (data digest error on tqpair=(0xb4a5b0) from nvme_tcp.c:1391, the failing READ sqid:1 cid:15 nsid:1 len:32 at a varying lba, and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061) repeats for roughly a hundred more commands between 06:41:59.467 and 06:42:00.050 ...]
00:17:04.618 [2024-12-05 06:42:00.050578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0)
00:17:04.618 [2024-12-05 06:42:00.050626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.050639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.055361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.055395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.055409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.059935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.059985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.059999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.064507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.064554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.064566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.069379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.069437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.069450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.073930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.073978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.073990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.618 [2024-12-05 06:42:00.078976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.618 [2024-12-05 06:42:00.079026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.618 [2024-12-05 06:42:00.079040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.083795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.083847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.083872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.088582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.088628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.088641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.092900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.092949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.092961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.097140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.097190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.097217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.101299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.101368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.105293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.105351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.105363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.109246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.109294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.113319] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.113376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.113389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.117355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.117402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.117414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.121463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.121511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.121523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.125530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.125577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.125589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.129799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.129848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.129861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.134096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.134144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.134157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.138267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.138316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.138338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.142349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.142383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.142395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.146456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.146491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.146503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.150508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.150541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.150554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.154544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.154576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.154588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.158551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.158584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.158596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.162490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.162537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.162548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.166523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.166570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.166582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.170622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.170669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.170682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.174645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.174692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.879 [2024-12-05 06:42:00.174703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.879 [2024-12-05 06:42:00.178686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.879 [2024-12-05 06:42:00.178733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.178744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.182746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.182793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.182804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.186766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.186813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.186825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.190815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.190864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.190875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.195169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.195203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.195215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.199325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.199372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.199385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.203580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.203615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.203628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.207845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.207894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.207907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.212033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.212082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.212094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.216122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.216173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.216184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.220275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.220323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.220347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.224311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.224369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.880 [2024-12-05 06:42:00.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.228299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.228357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.228369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.232372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.232420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.232432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.236374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.236422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.236433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.240497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.240546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.240558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.244900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.244966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.244978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.249278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.249355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.249370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.253500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.253551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.253562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.257553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.257601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.257613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.261674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.261723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.261735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.265739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.265788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.265800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.269650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.269712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.269723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.273762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.273828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.273857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.278283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.278340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.278354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.282700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.282747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.282759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.286945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.880 [2024-12-05 06:42:00.286995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.880 [2024-12-05 06:42:00.287008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.880 [2024-12-05 06:42:00.291162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.291211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.291236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.295456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.295493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.295508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.299762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.299809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.299820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.303964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.304013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.304026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.308221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.308269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.308281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.312435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 
00:17:04.881 [2024-12-05 06:42:00.312483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.312495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.316602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.316651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.316662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.320816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.320881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.320893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.325018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.325067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.325079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.329238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.329286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.333389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.333436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.333449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.337496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb4a5b0) 00:17:04.881 [2024-12-05 06:42:00.337543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.881 [2024-12-05 06:42:00.337555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.881 [2024-12-05 06:42:00.342264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
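Every injected corruption produces exactly one of these three-line signatures, so the number of hits can be tallied straight from a saved copy of the console output. A minimal sketch, assuming the output above was captured to a hypothetical build.log:

    # Count digest-error signatures in a saved copy of this console log.
    # -i catches both the initiator's "data digest error" and the target-side
    # "Data digest error" spellings seen later in the randwrite pass.
    grep -ci 'data digest error on tqpair' build.log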
00:17:05.139
00:17:05.139 Latency(us)
00:17:05.139 [2024-12-05T06:42:00.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:05.139 [2024-12-05T06:42:00.605Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:05.139 nvme0n1 : 2.00 7294.20 911.78 0.00 0.00 2190.19 1720.32 8102.63
00:17:05.139 [2024-12-05T06:42:00.605Z] ===================================================================================================================
00:17:05.139 [2024-12-05T06:42:00.605Z] Total : 7294.20 911.78 0.00 0.00 2190.19 1720.32 8102.63
00:17:05.139 0
00:17:05.139 06:42:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:05.139 06:42:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:05.139 06:42:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:05.139 | .driver_specific
00:17:05.139 | .nvme_error
00:17:05.139 | .status_code
00:17:05.139 | .command_transient_transport_error'
00:17:05.139 06:42:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
06:42:00 -- host/digest.sh@71 -- # (( 471 > 0 ))
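The 471 tested above is the per-bdev transient-transport-error counter that bdev_get_iostat exposes once error stats are enabled with bdev_nvme_set_options --nvme-error-stat. A minimal bash sketch of what the traced get_transient_errcount helper amounts to (function name, socket path, and jq filter are copied from the trace; error handling omitted):

    # Query bdevperf's RPC socket for per-bdev NVMe error stats and pull out
    # the transient transport error count.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    # The randread pass succeeds when at least one transient error was recorded:
    (( $(get_transient_errcount nvme0n1) > 0 ))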
06:42:00 -- host/digest.sh@73 -- # killprocess 83547
06:42:00 -- common/autotest_common.sh@936 -- # '[' -z 83547 ']'
06:42:00 -- common/autotest_common.sh@940 -- # kill -0 83547
06:42:00 -- common/autotest_common.sh@941 -- # uname
06:42:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
06:42:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83547
06:42:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1
killing process with pid 83547
06:42:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
06:42:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83547'
Received shutdown signal, test time was about 2.000000 seconds
00:17:05.398
00:17:05.398 Latency(us)
00:17:05.398 [2024-12-05T06:42:00.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:05.398 [2024-12-05T06:42:00.864Z] ===================================================================================================================
00:17:05.398 [2024-12-05T06:42:00.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
06:42:00 -- common/autotest_common.sh@955 -- # kill 83547
06:42:00 -- common/autotest_common.sh@960 -- # wait 83547
06:42:00 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
06:42:00 -- host/digest.sh@54 -- # local rw bs qd
06:42:00 -- host/digest.sh@56 -- # rw=randwrite
06:42:00 -- host/digest.sh@56 -- # bs=4096
06:42:00 -- host/digest.sh@56 -- # qd=128
06:42:00 -- host/digest.sh@58 -- # bperfpid=83603
06:42:00 -- host/digest.sh@60 -- # waitforlisten 83603 /var/tmp/bperf.sock
06:42:00 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
06:42:00 -- common/autotest_common.sh@829 -- # '[' -z 83603 ']'
06:42:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
06:42:00 -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:42:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
06:42:00 -- common/autotest_common.sh@838 -- # xtrace_disable
06:42:00 -- common/autotest_common.sh@10 -- # set +x
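run_bperf_err relaunches bdevperf for the randwrite pass with the flags shown in the trace; -z makes it idle until perform_tests arrives over its RPC socket. A hedged sketch of this launch-and-wait step, with a portable polling loop standing in for the autotest waitforlisten helper:

    # Start bdevperf on core mask 0x2 with a private RPC socket, then wait for
    # the socket to appear before issuing RPCs. Paths/flags mirror the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    while [ ! -S /var/tmp/bperf.sock ]; do
        kill -0 "$bperfpid" 2>/dev/null || exit 1   # bail if bdevperf died during startup
        sleep 0.1
    done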
00:17:05.657 [2024-12-05 06:42:00.880103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:05.657 [2024-12-05 06:42:00.880216] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83603 ]
00:17:05.657 [2024-12-05 06:42:01.016677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:05.657 [2024-12-05 06:42:01.050636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:06.591 06:42:01 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:06.591 06:42:01 -- common/autotest_common.sh@862 -- # return 0
00:17:06.591 06:42:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:06.591 06:42:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:06.850 06:42:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:06.850 06:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.850 06:42:02 -- common/autotest_common.sh@10 -- # set +x
00:17:06.850 06:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.850 06:42:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:06.850 06:42:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:07.108 nvme0n1
00:17:07.108 06:42:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:07.108 06:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.108 06:42:02 -- common/autotest_common.sh@10 -- # set +x
00:17:07.108 06:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.108 06:42:02 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:07.108 06:42:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:07.368 Running I/O for 2 seconds...
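Condensed, the configuration traced above comes down to five RPCs. A sketch assuming the same repo layout: bperf_rpc targets the bdevperf socket, while rpc_cmd (no -s flag) goes to the default application socket, here the NVMe-oF target, which is where the crc32c corruption is injected so that data digests on the wire stop matching:

    SPDK=/home/vagrant/spdk_repo/spdk
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }    # default application socket

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # clear any stale injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # TCP data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt every 256th crc32c
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests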
00:17:07.368 [2024-12-05 06:42:02.619525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ddc00
00:17:07.368 [2024-12-05 06:42:02.621124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:07.368 [2024-12-05 06:42:02.621196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:07.368 [2024-12-05 06:42:02.636223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190fef90
00:17:07.368 [2024-12-05 06:42:02.637734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:07.368 [2024-12-05 06:42:02.637766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the randwrite pass logs the same three-line signature -- Data digest error (tcp.c:2036, with a varying pdu offset in the 0x2000190xxxx range), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion -- roughly every 15 ms, with cid stepping by two and sqhd counting down from 007f ...]
00:17:07.628 [2024-12-05 06:42:02.909138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190fa7d8
00:17:07.628 [2024-12-05 06:42:02.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:07.628 [2024-12-05 06:42:02.910503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:17:07.628 [2024-12-05 06:42:02.923658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190fa3a0
00:17:07.628 [2024-12-05 06:42:02.925125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:07.628 [2024-12-05 06:42:02.925176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:02.939355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f9f68 00:17:07.628 [2024-12-05 06:42:02.940579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:02.940611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:02.953901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f9b30 00:17:07.628 [2024-12-05 06:42:02.955187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:02.955215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:02.968405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f96f8 00:17:07.628 [2024-12-05 06:42:02.969454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:02.969502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:02.983635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f92c0 00:17:07.628 [2024-12-05 06:42:02.985102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:02.985160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:02.999066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f8e88 00:17:07.628 [2024-12-05 06:42:03.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.000417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.014149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f8a50 00:17:07.628 [2024-12-05 06:42:03.015411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.015447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.029405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f8618 00:17:07.628 [2024-12-05 06:42:03.030452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.030488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.044284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f81e0 00:17:07.628 [2024-12-05 06:42:03.045615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.045649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.059754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f7da8 00:17:07.628 [2024-12-05 06:42:03.060970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.061001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.074841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f7970 00:17:07.628 [2024-12-05 06:42:03.076060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.076090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:07.628 [2024-12-05 06:42:03.089882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f7538 00:17:07.628 [2024-12-05 06:42:03.091042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.628 [2024-12-05 06:42:03.091082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.105513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f7100 00:17:07.887 [2024-12-05 06:42:03.106591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.106629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.121357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f6cc8 00:17:07.887 [2024-12-05 06:42:03.122457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.122492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.137633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f6890 00:17:07.887 [2024-12-05 06:42:03.138683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.138717] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.153255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f6458 00:17:07.887 [2024-12-05 06:42:03.154372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.154430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.170129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f6020 00:17:07.887 [2024-12-05 06:42:03.171277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.171355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.186637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f5be8 00:17:07.887 [2024-12-05 06:42:03.187644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.887 [2024-12-05 06:42:03.187679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:07.887 [2024-12-05 06:42:03.201989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f57b0 00:17:07.887 [2024-12-05 06:42:03.202938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.202986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.217248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f5378 00:17:07.888 [2024-12-05 06:42:03.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.218356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.232729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f4f40 00:17:07.888 [2024-12-05 06:42:03.233697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.233746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.249061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f4b08 00:17:07.888 [2024-12-05 06:42:03.250026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.250076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.264257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f46d0 00:17:07.888 [2024-12-05 06:42:03.265409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.265461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.279719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f4298 00:17:07.888 [2024-12-05 06:42:03.280908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.280947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.295666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f3e60 00:17:07.888 [2024-12-05 06:42:03.296604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.296653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.310846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f3a28 00:17:07.888 [2024-12-05 06:42:03.311778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.311971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.325674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f35f0 00:17:07.888 [2024-12-05 06:42:03.326824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.326858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:07.888 [2024-12-05 06:42:03.341702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f31b8 00:17:07.888 [2024-12-05 06:42:03.342647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.888 [2024-12-05 06:42:03.342683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.358000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f2d80 00:17:08.147 [2024-12-05 06:42:03.358840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 
06:42:03.358895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.373685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f2948 00:17:08.147 [2024-12-05 06:42:03.374637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.374676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.389476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f2510 00:17:08.147 [2024-12-05 06:42:03.390349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.390412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.405296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f20d8 00:17:08.147 [2024-12-05 06:42:03.406194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.406230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.420644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f1ca0 00:17:08.147 [2024-12-05 06:42:03.421440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.421475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.435828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f1868 00:17:08.147 [2024-12-05 06:42:03.436847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.436874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.451161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f1430 00:17:08.147 [2024-12-05 06:42:03.452067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.452098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.466351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f0ff8 00:17:08.147 [2024-12-05 06:42:03.467168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6631 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:08.147 [2024-12-05 06:42:03.467193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.481769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f0bc0 00:17:08.147 [2024-12-05 06:42:03.482554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.482614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.497106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f0788 00:17:08.147 [2024-12-05 06:42:03.497981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.498015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.514052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190f0350 00:17:08.147 [2024-12-05 06:42:03.514881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.514913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.530204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eff18 00:17:08.147 [2024-12-05 06:42:03.531094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.531141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.545198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190efae0 00:17:08.147 [2024-12-05 06:42:03.545940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.545972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.559982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ef6a8 00:17:08.147 [2024-12-05 06:42:03.560700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.560731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.574674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ef270 00:17:08.147 [2024-12-05 06:42:03.575437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:24982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.575471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.591085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eee38 00:17:08.147 [2024-12-05 06:42:03.591916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.591950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:08.147 [2024-12-05 06:42:03.607867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eea00 00:17:08.147 [2024-12-05 06:42:03.608712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.147 [2024-12-05 06:42:03.608762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.625261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ee5c8 00:17:08.406 [2024-12-05 06:42:03.626015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.626046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.642600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ee190 00:17:08.406 [2024-12-05 06:42:03.643363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.643392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.659046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190edd58 00:17:08.406 [2024-12-05 06:42:03.659828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.674250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ed920 00:17:08.406 [2024-12-05 06:42:03.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.674979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.689154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ed4e8 00:17:08.406 [2024-12-05 06:42:03.689842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:16010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.689880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.703686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ed0b0 00:17:08.406 [2024-12-05 06:42:03.704321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.704360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.718076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ecc78 00:17:08.406 [2024-12-05 06:42:03.718703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.718729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.732678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ec840 00:17:08.406 [2024-12-05 06:42:03.733288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.733310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.747366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ec408 00:17:08.406 [2024-12-05 06:42:03.748007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.748038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.762441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ebfd0 00:17:08.406 [2024-12-05 06:42:03.763034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.406 [2024-12-05 06:42:03.763065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:08.406 [2024-12-05 06:42:03.776811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ebb98 00:17:08.407 [2024-12-05 06:42:03.777363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.777400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.791139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eb760 00:17:08.407 [2024-12-05 06:42:03.791825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.791851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.805467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eb328 00:17:08.407 [2024-12-05 06:42:03.805990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.806015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.819989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eaef0 00:17:08.407 [2024-12-05 06:42:03.820532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.820557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.834307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190eaab8 00:17:08.407 [2024-12-05 06:42:03.834876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.834901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.849079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ea680 00:17:08.407 [2024-12-05 06:42:03.849643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.849669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:08.407 [2024-12-05 06:42:03.863258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190ea248 00:17:08.407 [2024-12-05 06:42:03.863882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.407 [2024-12-05 06:42:03.863907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:08.665 [2024-12-05 06:42:03.878636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e9e10 00:17:08.666 [2024-12-05 06:42:03.879155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.879185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.892924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e99d8 00:17:08.666 [2024-12-05 
06:42:03.893450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.893474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.907198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e95a0 00:17:08.666 [2024-12-05 06:42:03.907861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.907889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.921385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e9168 00:17:08.666 [2024-12-05 06:42:03.921845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.921871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.935583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e8d30 00:17:08.666 [2024-12-05 06:42:03.936053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.936079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.949612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e88f8 00:17:08.666 [2024-12-05 06:42:03.950056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.950081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.963847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e84c0 00:17:08.666 [2024-12-05 06:42:03.964287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.964312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.977919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e8088 00:17:08.666 [2024-12-05 06:42:03.978394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.978419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:03.992204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e7c50 
00:17:08.666 [2024-12-05 06:42:03.992672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:03.992699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.006462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e7818 00:17:08.666 [2024-12-05 06:42:04.006892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.006913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.022005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e73e0 00:17:08.666 [2024-12-05 06:42:04.022393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.022421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.036912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e6fa8 00:17:08.666 [2024-12-05 06:42:04.037301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.051278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e6b70 00:17:08.666 [2024-12-05 06:42:04.051752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.051778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.065670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e6738 00:17:08.666 [2024-12-05 06:42:04.066014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.066038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.080002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e6300 00:17:08.666 [2024-12-05 06:42:04.080383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.080405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.094133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) 
with pdu=0x2000190e5ec8 00:17:08.666 [2024-12-05 06:42:04.094496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.094521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.108349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e5a90 00:17:08.666 [2024-12-05 06:42:04.108699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.108722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:08.666 [2024-12-05 06:42:04.122292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e5658 00:17:08.666 [2024-12-05 06:42:04.122612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.666 [2024-12-05 06:42:04.122637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:08.926 [2024-12-05 06:42:04.137524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e5220 00:17:08.926 [2024-12-05 06:42:04.137864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.926 [2024-12-05 06:42:04.137887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:08.926 [2024-12-05 06:42:04.151820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e4de8 00:17:08.926 [2024-12-05 06:42:04.152114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.926 [2024-12-05 06:42:04.152140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:08.926 [2024-12-05 06:42:04.166187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e49b0 00:17:08.926 [2024-12-05 06:42:04.166491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.926 [2024-12-05 06:42:04.166517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:08.926 [2024-12-05 06:42:04.180772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e4578 00:17:08.926 [2024-12-05 06:42:04.181048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.926 [2024-12-05 06:42:04.181073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.195027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a4b160) with pdu=0x2000190e4140 00:17:08.927 [2024-12-05 06:42:04.195347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.195390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.209468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e3d08 00:17:08.927 [2024-12-05 06:42:04.209730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.209759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.224524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e38d0 00:17:08.927 [2024-12-05 06:42:04.224781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.224822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.239427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e3498 00:17:08.927 [2024-12-05 06:42:04.239697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.239722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.253619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e3060 00:17:08.927 [2024-12-05 06:42:04.253871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.253896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.267823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e2c28 00:17:08.927 [2024-12-05 06:42:04.268049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.268069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.282957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e27f0 00:17:08.927 [2024-12-05 06:42:04.283173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.283196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.297143] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e23b8 00:17:08.927 [2024-12-05 06:42:04.297377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.297398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.313355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e1f80 00:17:08.927 [2024-12-05 06:42:04.313571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.313598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.328947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e1b48 00:17:08.927 [2024-12-05 06:42:04.329183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.329204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.343884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e1710 00:17:08.927 [2024-12-05 06:42:04.344071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.344092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.358796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e12d8 00:17:08.927 [2024-12-05 06:42:04.358978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.358999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.373363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e0ea0 00:17:08.927 [2024-12-05 06:42:04.373537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.373557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:08.927 [2024-12-05 06:42:04.388634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e0a68 00:17:08.927 [2024-12-05 06:42:04.388855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.927 [2024-12-05 06:42:04.388879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.404180] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e0630 00:17:09.187 [2024-12-05 06:42:04.404338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.404378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.419333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190e01f8 00:17:09.187 [2024-12-05 06:42:04.419500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.419525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.434135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190dfdc0 00:17:09.187 [2024-12-05 06:42:04.434288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.449971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190df988 00:17:09.187 [2024-12-05 06:42:04.450103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.450124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.464345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190df550 00:17:09.187 [2024-12-05 06:42:04.464466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.478512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190df118 00:17:09.187 [2024-12-05 06:42:04.478619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.478639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.493199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190dece0 00:17:09.187 [2024-12-05 06:42:04.493315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.493336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 
06:42:04.509143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190de8a8 00:17:09.187 [2024-12-05 06:42:04.509232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.509253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.524679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190de038 00:17:09.187 [2024-12-05 06:42:04.524780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.524803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.548227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190de038 00:17:09.187 [2024-12-05 06:42:04.549654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.549720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.563331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190de470 00:17:09.187 [2024-12-05 06:42:04.564669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.564730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.578121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190de8a8 00:17:09.187 [2024-12-05 06:42:04.579527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.579561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:09.187 [2024-12-05 06:42:04.593273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a4b160) with pdu=0x2000190dece0 00:17:09.187 [2024-12-05 06:42:04.594647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.187 [2024-12-05 06:42:04.594693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:09.187 00:17:09.187 Latency(us) 00:17:09.187 [2024-12-05T06:42:04.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.187 [2024-12-05T06:42:04.653Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.187 nvme0n1 : 2.00 16741.51 65.40 0.00 0.00 7639.73 6791.91 23950.43 00:17:09.187 [2024-12-05T06:42:04.653Z] =================================================================================================================== 00:17:09.187 
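Sanity check on the summary table above: 16741.51 IOPS at the 4096-byte I/O size is 16741.51 × 4096 / 2^20 ≈ 65.40 MiB/s, matching the MiB/s column, and with queue depth 128 Little's law predicts an average latency of 128 / 16741.51 s ≈ 7646 us, consistent with the measured 7639.73 us. The same arithmetic as a minimal awk sketch, using only the numbers printed in the table:

awk 'BEGIN {
    iops = 16741.51; io_size = 4096; qd = 128                           # values from the table above
    printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 65.40
    printf "Little-law avg latency: %.0f us\n", qd / iops * 1e6         # prints ~7646
}'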
00:17:09.187 [2024-12-05T06:42:04.653Z] Total              :   16741.51      65.40       0.00       0.00    7639.73    6791.91   23950.43
00:17:09.187 0
00:17:09.187 06:42:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:09.187 06:42:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:09.187 06:42:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:09.187 | .driver_specific
00:17:09.187 | .nvme_error
00:17:09.187 | .status_code
00:17:09.187 | .command_transient_transport_error'
00:17:09.187 06:42:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:09.446 06:42:04 -- host/digest.sh@71 -- # (( 131 > 0 ))
00:17:09.446 06:42:04 -- host/digest.sh@73 -- # killprocess 83603
00:17:09.446 06:42:04 -- common/autotest_common.sh@936 -- # '[' -z 83603 ']'
00:17:09.446 06:42:04 -- common/autotest_common.sh@940 -- # kill -0 83603
00:17:09.446 06:42:04 -- common/autotest_common.sh@941 -- # uname
00:17:09.446 06:42:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:09.446 06:42:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83603
00:17:09.704 06:42:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1
killing process with pid 83603
06:42:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:09.704 06:42:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83603'
00:17:09.705 06:42:04 -- common/autotest_common.sh@955 -- # kill 83603
Received shutdown signal, test time was about 2.000000 seconds
00:17:09.705
00:17:09.705 Latency(us)
[2024-12-05T06:42:05.171Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-05T06:42:05.171Z] ===================================================================================================================
[2024-12-05T06:42:05.171Z] Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:17:09.705 06:42:04 -- common/autotest_common.sh@960 -- # wait 83603
00:17:09.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:09.705 06:42:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:17:09.705 06:42:05 -- host/digest.sh@54 -- # local rw bs qd
00:17:09.705 06:42:05 -- host/digest.sh@56 -- # rw=randwrite
00:17:09.705 06:42:05 -- host/digest.sh@56 -- # bs=131072
00:17:09.705 06:42:05 -- host/digest.sh@56 -- # qd=16
00:17:09.705 06:42:05 -- host/digest.sh@58 -- # bperfpid=83663
00:17:09.705 06:42:05 -- host/digest.sh@60 -- # waitforlisten 83663 /var/tmp/bperf.sock
00:17:09.705 06:42:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:09.705 06:42:05 -- common/autotest_common.sh@829 -- # '[' -z 83663 ']'
00:17:09.705 06:42:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:09.705 06:42:05 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:09.705 06:42:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:09.705 06:42:05 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:09.705 06:42:05 -- common/autotest_common.sh@10 -- # set +x
00:17:09.705 [2024-12-05 06:42:05.106652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:09.705 [2024-12-05 06:42:05.106977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83663 ]
00:17:09.705 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:09.705 Zero copy mechanism will not be used.
00:17:09.963 [2024-12-05 06:42:05.243987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:09.963 [2024-12-05 06:42:05.275869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:09.963 06:42:05 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:09.963 06:42:05 -- common/autotest_common.sh@862 -- # return 0
00:17:09.963 06:42:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:09.963 06:42:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:10.220 06:42:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:10.220 06:42:05 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.220 06:42:05 -- common/autotest_common.sh@10 -- # set +x
00:17:10.220 06:42:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.220 06:42:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:10.220 06:42:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:10.786 nvme0n1
00:17:10.786 06:42:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:10.786 06:42:06 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.786 06:42:06 -- common/autotest_common.sh@10 -- # set +x
00:17:10.786 06:42:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.786 06:42:06 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:10.786 06:42:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:10.786 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:10.786 Zero copy mechanism will not be used.
00:17:10.786 Running I/O for 2 seconds...
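The trace above is host/digest.sh arming the data-digest error case: bperf_rpc drives the bdevperf app listening on /var/tmp/bperf.sock, while rpc_cmd issues accel_error_inject_error over the autotest's default RPC socket (presumably the nvmf target started earlier in the job). A minimal standalone sketch of the same sequence, with the two helpers written out explicitly; the paths, addresses, and RPC arguments are taken verbatim from the trace, and the default-socket routing of rpc_cmd is our assumption:

    SPDK=/home/vagrant/spdk_repo/spdk                    # repo path as seen in the trace
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app
    rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }         # default socket, assumed nvmf target

    # Count NVMe error completions per status code and never retry a failed
    # I/O, so every injected digest error stays visible in the iostat counters.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Leave injection disarmed while the controller attaches.
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c result so data digests mismatch on the wire,
    # then kick off the queued bdevperf job.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a data digest error on the TCP qpair, and the corresponding WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) rather than success.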
00:17:10.786 [2024-12-05 06:42:06.160629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90
00:17:10.786 [2024-12-05 06:42:06.160951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:10.786 [2024-12-05 06:42:06.160981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[identical records condensed: the three-line pattern above repeats roughly every 5 ms from 06:42:06.165 onward, a data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90, the affected WRITE (sqid:1 cid:15 nsid:1, varying lba, len:32), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0; the transcript breaks off mid-record at 06:42:06.767]
sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.312 [2024-12-05 06:42:06.767667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.773350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.773763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.773827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.779311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.779686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.779748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.785071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.785689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.785743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.791435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.791771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.791828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.797239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.797755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.797814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.802932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.803230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.803257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.808267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.808634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.808663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.813505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.813817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.813844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.818850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.819127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.819154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.823970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.824261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.824289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.829170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.829647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.829670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.834396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.572 [2024-12-05 06:42:06.834709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.572 [2024-12-05 06:42:06.834737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.572 [2024-12-05 06:42:06.839698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.839997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.840056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.844815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 
[2024-12-05 06:42:06.845116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.845144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.850014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.850291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.850359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.854916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.855211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.855238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.860094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.860445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.865205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.865722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.865776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.870713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.871057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.871087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.876004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.876302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.876340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.881225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) 
with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.881691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.881714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.886476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.886776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.886805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.891981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.892297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.892340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.897304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.897774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.897801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.902733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.903046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.903075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.907919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.908229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.908258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.913240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.913716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.913754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.918432] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.918724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.918752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.923548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.923871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.923898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.928547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.928850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.928877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.933533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.933819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.933846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.938388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.938671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.938698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.943363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.943697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.948462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.948731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.948758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.953407] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.953679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.953737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.958266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.958559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.958587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.963210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.963558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.963587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.968134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.968617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.968642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.573 [2024-12-05 06:42:06.973286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.573 [2024-12-05 06:42:06.973612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.573 [2024-12-05 06:42:06.973640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:06.978310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:06.978612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:06.978639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:06.983210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:06.983570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:06.983599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
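Note on the repeating pattern above: each repetition is the same three-record sequence. The transport callback data_crc32_calc_done in tcp.c reports a DDGST (data digest) mismatch on the queue pair, nvme_io_qpair_print_command prints the affected WRITE, and spdk_nvme_print_completion prints the resulting COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 (generic) with status code 0x22. The dnr:0 bit means the command is retryable, which is consistent with the long run of repeats on the same qid/cid. NVMe/TCP computes both its header digest (HDGST) and data digest (DDGST) with CRC32C (Castagnoli). Below is a minimal pure-Python sketch of that checksum showing how a single corrupted payload bit produces the mismatch logged here; it assumes 512-byte blocks for the len:32 payloads, and SPDK itself uses table-driven or hardware-accelerated CRC32C rather than this bitwise form.

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
    the checksum NVMe/TCP uses for its HDGST/DDGST fields."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Sanity check against the well-known CRC32C test vector.
assert crc32c(b"123456789") == 0xE3069283

# A WRITE payload like the ones above: len:32 blocks, assuming 512 B/block.
payload = bytearray(32 * 512)
good_digest = crc32c(bytes(payload))

# Flip one bit, as a digest error-injection test effectively does.
payload[100] ^= 0x01
bad_digest = crc32c(bytes(payload))

# The receiver recomputes the digest over the PDU data and sees the
# mismatch that data_crc32_calc_done logs as "Data digest error".
assert good_digest != bad_digest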
00:17:11.574 [2024-12-05 06:42:06.988239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:06.988694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:06.988717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:06.993301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:06.993596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:06.993623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:06.998139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:06.998437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:06.998465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.003017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.003351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.003378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.007761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.008043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.008070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.012813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.013112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.013141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.017721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.018004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.018032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.022744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.023082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.023127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.028242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.028787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.028831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.574 [2024-12-05 06:42:07.034436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.574 [2024-12-05 06:42:07.034725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.574 [2024-12-05 06:42:07.034754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.040254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.040757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.040817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.046253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.046581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.046642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.052084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.052636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.052668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.057880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.058274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.058301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.063591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.063950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.063980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.069074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.069441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.069478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.074569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.074913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.074943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.080297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.080818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.080843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.086248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.086586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.086635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.091896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.092408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.092433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.097523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.097845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.097874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.102598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.102893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.102920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.107528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.107830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.107887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.112440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.112728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.835 [2024-12-05 06:42:07.112754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.835 [2024-12-05 06:42:07.117285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.835 [2024-12-05 06:42:07.117615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.117643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.122123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.122464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.122492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.127021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.127345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.127372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.132000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.132470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 
[2024-12-05 06:42:07.132494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.137061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.137360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.137386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.141910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.142217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.142244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.146930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.147201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.147230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.152492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.152861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.152905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.159367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.159724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.159755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.164925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.165214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.165237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.169846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.170129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.170157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.174636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.174926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.174970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.179464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.179758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.179785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.184383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.184838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.184862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.189350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.189662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.194126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.194423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.194450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.198973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.199248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.199299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.203828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.204269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.204292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.208836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.209121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.209148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.213658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.213942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.213969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.218641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.218943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.218971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.223583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.223916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.223944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.228523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.228827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.228854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.233385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.233669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.233696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.836 [2024-12-05 06:42:07.238189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.836 [2024-12-05 06:42:07.238506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.836 [2024-12-05 06:42:07.238535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.242967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.243270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.243325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.247869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.248296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.248318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.252876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.253147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.253174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.257723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.258013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.258042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.262594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.262898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.262926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.267426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.267736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.267762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.272303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 
[2024-12-05 06:42:07.272739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.272762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.277230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.277519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.277546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.281980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.282270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.282296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.287121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.287530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.287564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.292107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.292578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.292601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.837 [2024-12-05 06:42:07.297481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:11.837 [2024-12-05 06:42:07.297778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.837 [2024-12-05 06:42:07.297821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.302637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.302911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.302955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.308230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.308694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.308720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.313518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.313827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.313856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.318550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.318860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.318888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.323753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.324058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.324087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.328817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.329130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.329158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.333879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.334181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.334209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.338865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.339150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.339177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.343704] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.344015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.348669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.348976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.349004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.353527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.098 [2024-12-05 06:42:07.353812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.098 [2024-12-05 06:42:07.353839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.098 [2024-12-05 06:42:07.358302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.099 [2024-12-05 06:42:07.358597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.099 [2024-12-05 06:42:07.358624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.099 [2024-12-05 06:42:07.363047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.099 [2024-12-05 06:42:07.363385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.099 [2024-12-05 06:42:07.363413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.099 [2024-12-05 06:42:07.367967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.099 [2024-12-05 06:42:07.368431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.099 [2024-12-05 06:42:07.368455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.099 [2024-12-05 06:42:07.372933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.099 [2024-12-05 06:42:07.373240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.099 [2024-12-05 06:42:07.373267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:12.362 [2024-12-05 06:42:07.763525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.763875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.763898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.768729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.769036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.769065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.773711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.773989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.774046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.778881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.779168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.779195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.783861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.784179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.784202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.789007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.362 [2024-12-05 06:42:07.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.362 [2024-12-05 06:42:07.789352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.362 [2024-12-05 06:42:07.793999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.794322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.794359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.363 [2024-12-05 06:42:07.799109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.799443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.799471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.363 [2024-12-05 06:42:07.804151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.804513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.804542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.363 [2024-12-05 06:42:07.809283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.809588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.809615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.363 [2024-12-05 06:42:07.814357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.814723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.814752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.363 [2024-12-05 06:42:07.819563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.363 [2024-12-05 06:42:07.819911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.363 [2024-12-05 06:42:07.819934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.623 [2024-12-05 06:42:07.825226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.623 [2024-12-05 06:42:07.825536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.623 [2024-12-05 06:42:07.825568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.623 [2024-12-05 06:42:07.830561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90 00:17:12.623 [2024-12-05 06:42:07.830882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.623 [2024-12-05 06:42:07.830913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:12.623 [2024-12-05 06:42:07.835924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90
00:17:12.623 [2024-12-05 06:42:07.836244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:12.623 [2024-12-05 06:42:07.836267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE command / COMMAND TRANSIENT TRANSPORT ERROR triple repeats for every remaining in-flight WRITE on qid:1 (only the lba varies; sqhd cycles 0001/0021/0041/0061) from 06:42:07.840 through 06:42:08.147 ...]
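Every entry in the run above is the same three-step pattern: tcp.c recomputes the CRC32C data digest (DDGST) that NVMe/TCP appends to each data PDU, the recomputed value disagrees with the digest on the wire, and the in-flight WRITE is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) so the initiator is allowed to retry. A minimal sketch of that check, assuming only the spec-level CRC32C (Castagnoli) parameters; the helper names below are illustrative, not SPDK's own:

    # Hypothetical stand-alone version of the digest check that
    # data_crc32_calc_done performs on each received data PDU.
    def crc32c(data: bytes) -> int:
        # Bitwise CRC32C: reflected polynomial 0x82F63B78, init and
        # final XOR of 0xFFFFFFFF (the parameters NVMe/TCP mandates).
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(pdu_data: bytes, received_ddgst: int) -> bool:
        # A mismatch is surfaced as a transport error rather than a media
        # error, which is why the completions above read TRANSIENT
        # TRANSPORT ERROR and remain retryable.
        return crc32c(pdu_data) == received_ddgst

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value

Those (00/22) completions are exactly what host/digest.sh counts next through bdev_get_iostat: the 388 transient errors asserted a few lines below.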
00:17:12.884 [2024-12-05 06:42:08.152689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a49e30) with pdu=0x2000190fef90
00:17:12.884 [2024-12-05 06:42:08.152832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:12.884 [2024-12-05 06:42:08.152865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:12.884
00:17:12.884 Latency(us)
00:17:12.884 [2024-12-05T06:42:08.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:12.884 [2024-12-05T06:42:08.350Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:17:12.884 nvme0n1 : 2.00 6006.27 750.78 0.00 0.00 2657.98 2129.92 8162.21
00:17:12.885 [2024-12-05T06:42:08.351Z] ===================================================================================================================
00:17:12.885 [2024-12-05T06:42:08.351Z] Total : 6006.27 750.78 0.00 0.00 2657.98 2129.92 8162.21
00:17:12.885 0
00:17:12.885 06:42:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:12.885 06:42:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:12.885 06:42:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:12.885 | .driver_specific
00:17:12.885 | .nvme_error
00:17:12.885 | .status_code
00:17:12.885 | .command_transient_transport_error'
00:17:13.144 06:42:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:13.144 06:42:08 -- host/digest.sh@71 -- # (( 388 > 0 ))
00:17:13.144 06:42:08 -- host/digest.sh@73 -- # killprocess 83663
00:17:13.144 06:42:08 -- common/autotest_common.sh@936 -- # '[' -z 83663 ']'
00:17:13.144 06:42:08 -- common/autotest_common.sh@940 -- # kill -0 83663
00:17:13.144 06:42:08 -- common/autotest_common.sh@941 -- # uname
00:17:13.144 06:42:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:13.144 06:42:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83663
00:17:13.144 06:42:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:13.144 06:42:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
killing process with pid 83663
06:42:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83663'
Received shutdown signal, test time was about 2.000000 seconds
00:17:13.144
00:17:13.144 Latency(us)
00:17:13.144 [2024-12-05T06:42:08.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:13.144 [2024-12-05T06:42:08.610Z] ===================================================================================================================
00:17:13.144 [2024-12-05T06:42:08.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:13.144 06:42:08 -- common/autotest_common.sh@955 -- # kill 83663
00:17:13.144 06:42:08 -- common/autotest_common.sh@960 -- # wait 83663
00:17:13.403 06:42:08 -- host/digest.sh@115 -- # killprocess 83459
00:17:13.403 06:42:08 -- common/autotest_common.sh@936 -- # '[' -z 83459 ']'
00:17:13.403 06:42:08 -- common/autotest_common.sh@940 -- # kill -0 83459
00:17:13.403 06:42:08 -- common/autotest_common.sh@941 -- # uname
00:17:13.403 06:42:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:13.403 06:42:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83459
00:17:13.403 06:42:08 --
common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:13.403 06:42:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:13.403 killing process with pid 83459 00:17:13.403 06:42:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83459' 00:17:13.403 06:42:08 -- common/autotest_common.sh@955 -- # kill 83459 00:17:13.403 06:42:08 -- common/autotest_common.sh@960 -- # wait 83459 00:17:13.403 00:17:13.403 real 0m16.748s 00:17:13.403 user 0m33.072s 00:17:13.403 sys 0m4.550s 00:17:13.403 06:42:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:13.403 06:42:08 -- common/autotest_common.sh@10 -- # set +x 00:17:13.403 ************************************ 00:17:13.403 END TEST nvmf_digest_error 00:17:13.403 ************************************ 00:17:13.403 06:42:08 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:17:13.403 06:42:08 -- host/digest.sh@139 -- # nvmftestfini 00:17:13.403 06:42:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:13.403 06:42:08 -- nvmf/common.sh@116 -- # sync 00:17:13.662 06:42:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:13.662 06:42:08 -- nvmf/common.sh@119 -- # set +e 00:17:13.662 06:42:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:13.662 06:42:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:13.662 rmmod nvme_tcp 00:17:13.662 rmmod nvme_fabrics 00:17:13.662 rmmod nvme_keyring 00:17:13.662 06:42:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:13.662 06:42:08 -- nvmf/common.sh@123 -- # set -e 00:17:13.662 06:42:08 -- nvmf/common.sh@124 -- # return 0 00:17:13.662 06:42:08 -- nvmf/common.sh@477 -- # '[' -n 83459 ']' 00:17:13.662 06:42:08 -- nvmf/common.sh@478 -- # killprocess 83459 00:17:13.662 06:42:08 -- common/autotest_common.sh@936 -- # '[' -z 83459 ']' 00:17:13.662 06:42:08 -- common/autotest_common.sh@940 -- # kill -0 83459 00:17:13.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83459) - No such process 00:17:13.662 06:42:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83459 is not found' 00:17:13.662 Process with pid 83459 is not found 00:17:13.662 06:42:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:13.662 06:42:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:13.662 06:42:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:13.662 06:42:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.662 06:42:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:13.662 06:42:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.662 06:42:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.662 06:42:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.662 06:42:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:13.662 00:17:13.662 real 0m32.704s 00:17:13.662 user 1m2.694s 00:17:13.662 sys 0m9.208s 00:17:13.662 06:42:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:13.662 06:42:08 -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 ************************************ 00:17:13.662 END TEST nvmf_digest 00:17:13.662 ************************************ 00:17:13.662 06:42:09 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:13.662 06:42:09 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:13.662 06:42:09 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:13.662 06:42:09 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:17:13.662 06:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.662 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 ************************************ 00:17:13.662 START TEST nvmf_multipath 00:17:13.662 ************************************ 00:17:13.662 06:42:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:13.662 * Looking for test storage... 00:17:13.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.662 06:42:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:13.662 06:42:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:13.662 06:42:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:13.922 06:42:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:13.922 06:42:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:13.922 06:42:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:13.922 06:42:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:13.922 06:42:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:13.922 06:42:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:13.922 06:42:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.922 06:42:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:13.922 06:42:09 -- scripts/common.sh@337 -- # local 'op=<' 00:17:13.922 06:42:09 -- scripts/common.sh@339 -- # ver1_l=2 00:17:13.922 06:42:09 -- scripts/common.sh@340 -- # ver2_l=1 00:17:13.922 06:42:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:13.922 06:42:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:13.922 06:42:09 -- scripts/common.sh@344 -- # : 1 00:17:13.922 06:42:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:13.922 06:42:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.922 06:42:09 -- scripts/common.sh@364 -- # decimal 1 00:17:13.922 06:42:09 -- scripts/common.sh@352 -- # local d=1 00:17:13.922 06:42:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.922 06:42:09 -- scripts/common.sh@354 -- # echo 1 00:17:13.922 06:42:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:13.922 06:42:09 -- scripts/common.sh@365 -- # decimal 2 00:17:13.922 06:42:09 -- scripts/common.sh@352 -- # local d=2 00:17:13.922 06:42:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.922 06:42:09 -- scripts/common.sh@354 -- # echo 2 00:17:13.922 06:42:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:13.922 06:42:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:13.922 06:42:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:13.922 06:42:09 -- scripts/common.sh@367 -- # return 0 00:17:13.922 06:42:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.922 06:42:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.922 --rc genhtml_branch_coverage=1 00:17:13.922 --rc genhtml_function_coverage=1 00:17:13.922 --rc genhtml_legend=1 00:17:13.922 --rc geninfo_all_blocks=1 00:17:13.922 --rc geninfo_unexecuted_blocks=1 00:17:13.922 00:17:13.922 ' 00:17:13.922 06:42:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.922 --rc genhtml_branch_coverage=1 00:17:13.922 --rc genhtml_function_coverage=1 00:17:13.922 --rc genhtml_legend=1 00:17:13.922 --rc geninfo_all_blocks=1 00:17:13.922 --rc geninfo_unexecuted_blocks=1 00:17:13.922 00:17:13.922 ' 00:17:13.922 06:42:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.922 --rc genhtml_branch_coverage=1 00:17:13.922 --rc genhtml_function_coverage=1 00:17:13.922 --rc genhtml_legend=1 00:17:13.922 --rc geninfo_all_blocks=1 00:17:13.922 --rc geninfo_unexecuted_blocks=1 00:17:13.922 00:17:13.922 ' 00:17:13.922 06:42:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.922 --rc genhtml_branch_coverage=1 00:17:13.922 --rc genhtml_function_coverage=1 00:17:13.922 --rc genhtml_legend=1 00:17:13.922 --rc geninfo_all_blocks=1 00:17:13.922 --rc geninfo_unexecuted_blocks=1 00:17:13.922 00:17:13.922 ' 00:17:13.922 06:42:09 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.922 06:42:09 -- nvmf/common.sh@7 -- # uname -s 00:17:13.922 06:42:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.922 06:42:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.922 06:42:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.922 06:42:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.922 06:42:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.922 06:42:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.922 06:42:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.922 06:42:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.922 06:42:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.922 06:42:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.922 06:42:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:17:13.922 
06:42:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e
00:17:13.922 06:42:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:13.922 06:42:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:13.922 06:42:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:17:13.922 06:42:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:13.922 06:42:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:13.922 06:42:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:13.922 06:42:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:13.923 [... paths/export.sh@2-@6 prepend the golangci/protoc/go toolchain directories to PATH and re-export it; the repeated full-PATH dumps are elided ...]
00:17:13.923 06:42:09 -- nvmf/common.sh@46 -- # : 0
00:17:13.923 06:42:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:17:13.923 06:42:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:17:13.923 06:42:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:17:13.923 06:42:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:13.923 06:42:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:13.923 06:42:09 -- nvmf/common.sh@32 -- # '[' -n '' ']'
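The NVME_HOSTNQN/NVME_HOSTID pair exported a few entries above comes from `nvme gen-hostnqn`, which wraps a freshly generated UUID in the fixed 2014-08 NVMe-oF prefix. A one-line equivalent, as a sketch (the UUID naturally differs on every run):

    # Sketch of the NVME_HOSTNQN format seen above.
    import uuid

    hostid = str(uuid.uuid4())
    hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{hostid}"
    print(hostnqn)  # e.g. nqn.2014-08.org.nvmexpress:uuid:910f3027-...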
00:17:13.923 06:42:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:13.923 06:42:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:13.923 06:42:09 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.923 06:42:09 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.923 06:42:09 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.923 06:42:09 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:13.923 06:42:09 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.923 06:42:09 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:13.923 06:42:09 -- host/multipath.sh@30 -- # nvmftestinit 00:17:13.923 06:42:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:13.923 06:42:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.923 06:42:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:13.923 06:42:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:13.923 06:42:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:13.923 06:42:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.923 06:42:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.923 06:42:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.923 06:42:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:13.923 06:42:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:13.923 06:42:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:13.923 06:42:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:13.923 06:42:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:13.923 06:42:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:13.923 06:42:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.923 06:42:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.923 06:42:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.923 06:42:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:13.923 06:42:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.923 06:42:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.923 06:42:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.923 06:42:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.923 06:42:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.923 06:42:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.923 06:42:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.923 06:42:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.923 06:42:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:13.923 06:42:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:13.923 Cannot find device "nvmf_tgt_br" 00:17:13.923 06:42:09 -- nvmf/common.sh@154 -- # true 00:17:13.923 06:42:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.923 Cannot find device "nvmf_tgt_br2" 00:17:13.923 06:42:09 -- nvmf/common.sh@155 -- # true 00:17:13.923 06:42:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:13.923 06:42:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:13.923 Cannot find device "nvmf_tgt_br" 00:17:13.923 06:42:09 -- nvmf/common.sh@157 -- # true 00:17:13.923 06:42:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:13.923 Cannot find device 
"nvmf_tgt_br2" 00:17:13.923 06:42:09 -- nvmf/common.sh@158 -- # true 00:17:13.923 06:42:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:13.923 06:42:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:13.923 06:42:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.182 06:42:09 -- nvmf/common.sh@161 -- # true 00:17:14.182 06:42:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.182 06:42:09 -- nvmf/common.sh@162 -- # true 00:17:14.183 06:42:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.183 06:42:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.183 06:42:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.183 06:42:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.183 06:42:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.183 06:42:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.183 06:42:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.183 06:42:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.183 06:42:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:14.183 06:42:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:14.183 06:42:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:14.183 06:42:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:14.183 06:42:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:14.183 06:42:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.183 06:42:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.183 06:42:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.183 06:42:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:14.183 06:42:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:14.183 06:42:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.183 06:42:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.183 06:42:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.183 06:42:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.183 06:42:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.183 06:42:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:14.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:14.183 00:17:14.183 --- 10.0.0.2 ping statistics --- 00:17:14.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.183 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:14.183 06:42:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:14.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:14.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:17:14.183 00:17:14.183 --- 10.0.0.3 ping statistics --- 00:17:14.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.183 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:14.183 06:42:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:14.183 00:17:14.183 --- 10.0.0.1 ping statistics --- 00:17:14.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.183 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:14.183 06:42:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.183 06:42:09 -- nvmf/common.sh@421 -- # return 0 00:17:14.183 06:42:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:14.183 06:42:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.183 06:42:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:14.183 06:42:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:14.183 06:42:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.183 06:42:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:14.183 06:42:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:14.183 06:42:09 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:14.183 06:42:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.183 06:42:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.183 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:17:14.183 06:42:09 -- nvmf/common.sh@469 -- # nvmfpid=83922 00:17:14.183 06:42:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:14.183 06:42:09 -- nvmf/common.sh@470 -- # waitforlisten 83922 00:17:14.183 06:42:09 -- common/autotest_common.sh@829 -- # '[' -z 83922 ']' 00:17:14.183 06:42:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.183 06:42:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.183 06:42:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.183 06:42:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.183 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:17:14.183 [2024-12-05 06:42:09.625588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:14.183 [2024-12-05 06:42:09.625722] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.442 [2024-12-05 06:42:09.760446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:14.442 [2024-12-05 06:42:09.794842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.442 [2024-12-05 06:42:09.795005] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.442 [2024-12-05 06:42:09.795018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
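The three pings that just succeeded are the smoke test for the topology nvmf_veth_init assembled above: 10.0.0.1 on nvmf_init_if for the initiator, 10.0.0.2 and 10.0.0.3 on veth peers moved into the nvmf_tgt_ns_spdk namespace for the target, all joined through the nvmf_br bridge. A condensed replay of those steps, mirroring only the ip/iptables commands visible in the trace (a root-only sketch, not the full nvmf/common.sh logic):

    # Hypothetical replay of the nvmf_veth_init commands traced above.
    import subprocess

    STEPS = [
        "ip netns add nvmf_tgt_ns_spdk",
        "ip link add nvmf_init_if type veth peer name nvmf_init_br",
        "ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br",
        "ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2",
        "ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk",
        "ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk",
        "ip addr add 10.0.0.1/24 dev nvmf_init_if",
        "ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if",
        "ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2",
        "ip link set nvmf_init_if up",
        "ip link set nvmf_init_br up",
        "ip link set nvmf_tgt_br up",
        "ip link set nvmf_tgt_br2 up",
        "ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up",
        "ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up",
        "ip netns exec nvmf_tgt_ns_spdk ip link set lo up",
        "ip link add nvmf_br type bridge",
        "ip link set nvmf_br up",
        "ip link set nvmf_init_br master nvmf_br",
        "ip link set nvmf_tgt_br master nvmf_br",
        "ip link set nvmf_tgt_br2 master nvmf_br",
        "iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT",
        "iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT",
    ]

    for step in STEPS:
        subprocess.run(step.split(), check=True)  # fail fast, like set -e

After this, the initiator-side namespace can reach the target addresses over plain TCP, which is what the three pings verify before any NVMe-oF traffic starts.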
00:17:14.442 [2024-12-05 06:42:09.795025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.442 [2024-12-05 06:42:09.796377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.442 [2024-12-05 06:42:09.796396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.442 06:42:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.442 06:42:09 -- common/autotest_common.sh@862 -- # return 0 00:17:14.442 06:42:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:14.442 06:42:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.442 06:42:09 -- common/autotest_common.sh@10 -- # set +x 00:17:14.700 06:42:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.700 06:42:09 -- host/multipath.sh@33 -- # nvmfapp_pid=83922 00:17:14.700 06:42:09 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:14.959 [2024-12-05 06:42:10.196407] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.959 06:42:10 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:15.217 Malloc0 00:17:15.217 06:42:10 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:15.474 06:42:10 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.732 06:42:10 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.732 [2024-12-05 06:42:11.160135] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.732 06:42:11 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:16.297 [2024-12-05 06:42:11.484463] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.297 06:42:11 -- host/multipath.sh@44 -- # bdevperf_pid=83973 00:17:16.297 06:42:11 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.297 06:42:11 -- host/multipath.sh@47 -- # waitforlisten 83973 /var/tmp/bdevperf.sock 00:17:16.297 06:42:11 -- common/autotest_common.sh@829 -- # '[' -z 83973 ']' 00:17:16.297 06:42:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.297 06:42:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.297 06:42:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
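Before bdevperf comes up, the trace above has already provisioned the target end to end: a TCP transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, block size 512), and subsystem cnode1 exporting that bdev on listeners 4420 and 4421, two paths to one namespace, which is the point of the multipath test. The same sequence gathered into one sketch (rpc.py path and arguments copied from the log; error handling reduced to check=True):

    # Replay of the target-side provisioning calls traced above.
    import subprocess

    RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"

    def rpc(*args: str) -> None:
        subprocess.run([RPC_PY, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o", "-u", "8192")
    rpc("bdev_malloc_create", "64", "512", "-b", "Malloc0")
    rpc("nvmf_create_subsystem", NQN, "-a", "-s", "SPDK00000000000001", "-r", "-m", "2")
    rpc("nvmf_subsystem_add_ns", NQN, "Malloc0")
    for port in ("4420", "4421"):
        rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp", "-a", "10.0.0.2", "-s", port)

The initiator then attaches the same subsystem twice (once per port) with -x multipath, giving the single Nvme0n1 bdev two selectable paths.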
00:17:16.297 06:42:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.297 06:42:11 -- common/autotest_common.sh@10 -- # set +x 00:17:16.297 06:42:11 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:17.230 06:42:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.230 06:42:12 -- common/autotest_common.sh@862 -- # return 0 00:17:17.230 06:42:12 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:17.487 06:42:12 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:17.745 Nvme0n1 00:17:17.745 06:42:13 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:18.002 Nvme0n1 00:17:18.002 06:42:13 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:18.002 06:42:13 -- host/multipath.sh@78 -- # sleep 1 00:17:18.935 06:42:14 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:18.935 06:42:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:19.196 06:42:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:19.454 06:42:14 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:19.454 06:42:14 -- host/multipath.sh@65 -- # dtrace_pid=84023 00:17:19.454 06:42:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:19.454 06:42:14 -- host/multipath.sh@66 -- # sleep 6 00:17:26.041 06:42:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:26.041 06:42:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:26.041 06:42:21 -- host/multipath.sh@67 -- # active_port=4421 00:17:26.041 06:42:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:26.041 Attaching 4 probes... 
00:17:26.041 @path[10.0.0.2, 4421]: 19785
00:17:26.041 @path[10.0.0.2, 4421]: 20106
00:17:26.041 @path[10.0.0.2, 4421]: 19971
00:17:26.041 @path[10.0.0.2, 4421]: 20267
00:17:26.041 @path[10.0.0.2, 4421]: 20097
00:17:26.041 06:42:21 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:17:26.041 06:42:21 -- host/multipath.sh@69 -- # sed -n 1p
00:17:26.041 06:42:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:17:26.041 06:42:21 -- host/multipath.sh@69 -- # port=4421
00:17:26.041 06:42:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:17:26.041 06:42:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:17:26.041 06:42:21 -- host/multipath.sh@72 -- # kill 84023
00:17:26.041 06:42:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:26.041 06:42:21 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible
00:17:26.041 06:42:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:17:26.300 06:42:21 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:17:26.300 06:42:21 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420
00:17:26.300 06:42:21 -- host/multipath.sh@65 -- # dtrace_pid=84132
00:17:26.300 06:42:21 -- host/multipath.sh@66 -- # sleep 6
00:17:26.300 06:42:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:17:32.859 06:42:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:17:32.859 06:42:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:17:32.859 06:42:28 -- host/multipath.sh@67 -- # active_port=4420
00:17:32.859 06:42:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:32.859 Attaching 4 probes...
00:17:32.859 @path[10.0.0.2, 4420]: 20197
00:17:32.859 @path[10.0.0.2, 4420]: 20186
00:17:32.859 @path[10.0.0.2, 4420]: 20313
00:17:32.859 @path[10.0.0.2, 4420]: 20099
00:17:32.859 @path[10.0.0.2, 4420]: 20143
00:17:32.859 06:42:28 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:17:32.859 06:42:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:17:32.859 06:42:28 -- host/multipath.sh@69 -- # sed -n 1p
00:17:32.859 06:42:28 -- host/multipath.sh@69 -- # port=4420
00:17:32.859 06:42:28 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:17:32.859 06:42:28 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:17:32.859 06:42:28 -- host/multipath.sh@72 -- # kill 84132
00:17:32.859 06:42:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:32.859 06:42:28 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized
00:17:32.859 06:42:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:17:32.859 06:42:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:17:33.117 06:42:28 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421
00:17:33.117 06:42:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:17:33.117 06:42:28 -- host/multipath.sh@65 -- # dtrace_pid=84250
00:17:33.117 06:42:28 -- host/multipath.sh@66 -- # sleep 6
00:17:39.702 06:42:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:17:39.702 06:42:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:17:39.702 06:42:34 -- host/multipath.sh@67 -- # active_port=4421
00:17:39.702 06:42:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:39.702 Attaching 4 probes...
00:17:39.702 @path[10.0.0.2, 4421]: 15713
00:17:39.702 @path[10.0.0.2, 4421]: 19889
00:17:39.702 @path[10.0.0.2, 4421]: 19959
00:17:39.702 @path[10.0.0.2, 4421]: 20018
00:17:39.702 @path[10.0.0.2, 4421]: 20411
00:17:39.702 06:42:34 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:17:39.702 06:42:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:17:39.702 06:42:34 -- host/multipath.sh@69 -- # sed -n 1p
00:17:39.702 06:42:34 -- host/multipath.sh@69 -- # port=4421
00:17:39.702 06:42:34 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:17:39.702 06:42:34 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:17:39.702 06:42:34 -- host/multipath.sh@72 -- # kill 84250
00:17:39.702 06:42:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:39.702 06:42:34 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible
00:17:39.702 06:42:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:17:39.960 06:42:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:17:39.960 06:42:35 -- host/multipath.sh@94 -- # confirm_io_on_port '' ''
00:17:39.960 06:42:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:17:39.960 06:42:35 -- host/multipath.sh@65 -- # dtrace_pid=84369
00:17:39.960 06:42:35 -- host/multipath.sh@66 -- # sleep 6
00:17:46.517 06:42:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:17:46.517 06:42:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'
00:17:46.517 06:42:41 -- host/multipath.sh@67 -- # active_port=
00:17:46.517 06:42:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:46.517 Attaching 4 probes...
00:17:46.517
00:17:46.517
00:17:46.517
00:17:46.517
00:17:46.517
00:17:46.517 06:42:41 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:17:46.517 06:42:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:17:46.517 06:42:41 -- host/multipath.sh@69 -- # sed -n 1p
00:17:46.517 06:42:41 -- host/multipath.sh@69 -- # port=
00:17:46.517 06:42:41 -- host/multipath.sh@70 -- # [[ '' == '' ]]
00:17:46.517 06:42:41 -- host/multipath.sh@71 -- # [[ '' == '' ]]
00:17:46.517 06:42:41 -- host/multipath.sh@72 -- # kill 84369
00:17:46.517 06:42:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:46.517 06:42:41 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized
00:17:46.517 06:42:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:17:46.776 06:42:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:17:46.776 06:42:42 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421
00:17:46.776 06:42:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:17:46.776 06:42:42 -- host/multipath.sh@65 -- # dtrace_pid=84481
00:17:46.776 06:42:42 -- host/multipath.sh@66 -- # sleep 6
00:17:53.333 06:42:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:17:53.333 06:42:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:17:53.333 06:42:48 -- host/multipath.sh@67 -- # active_port=4421
00:17:53.333 06:42:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:53.333 Attaching 4 probes...
00:17:53.333 @path[10.0.0.2, 4421]: 19497
00:17:53.333 @path[10.0.0.2, 4421]: 19729
00:17:53.333 @path[10.0.0.2, 4421]: 20173
00:17:53.333 @path[10.0.0.2, 4421]: 19790
00:17:53.333 @path[10.0.0.2, 4421]: 19620
00:17:53.333 06:42:48 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:17:53.333 06:42:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:17:53.333 06:42:48 -- host/multipath.sh@69 -- # sed -n 1p
00:17:53.333 06:42:48 -- host/multipath.sh@69 -- # port=4421
00:17:53.333 06:42:48 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:17:53.333 06:42:48 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:17:53.333 06:42:48 -- host/multipath.sh@72 -- # kill 84481
00:17:53.333 06:42:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:17:53.334 06:42:48 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:53.334 [2024-12-05 06:42:48.696719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696983] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.696997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 [2024-12-05 06:42:48.697101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9f7a0 is same with the state(5) to be set
00:17:53.334 06:42:48 -- host/multipath.sh@101 -- # sleep 1
00:17:54.271 06:42:49 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:17:54.271 06:42:49 -- host/multipath.sh@65 -- # dtrace_pid=84610
00:17:54.271 06:42:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:17:54.271 06:42:49 -- host/multipath.sh@66 -- # sleep 6
00:18:00.861 06:42:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:00.861 06:42:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:18:00.861 06:42:56 -- host/multipath.sh@67 -- # active_port=4420
00:18:00.861 06:42:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:00.861 Attaching 4 probes...
00:18:00.861 @path[10.0.0.2, 4420]: 18609
00:18:00.861 @path[10.0.0.2, 4420]: 19576
00:18:00.861 @path[10.0.0.2, 4420]: 19479
00:18:00.861 @path[10.0.0.2, 4420]: 19608
00:18:00.861 @path[10.0.0.2, 4420]: 19830
00:18:00.861 06:42:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:00.861 06:42:56 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:00.861 06:42:56 -- host/multipath.sh@69 -- # sed -n 1p
00:18:00.861 06:42:56 -- host/multipath.sh@69 -- # port=4420
00:18:00.861 06:42:56 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:18:00.861 06:42:56 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:18:00.861 06:42:56 -- host/multipath.sh@72 -- # kill 84610
00:18:00.861 06:42:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:00.861 06:42:56 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:00.861 [2024-12-05 06:42:56.304903] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:18:01.120 06:42:56 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:18:01.379 06:42:56 -- host/multipath.sh@111 -- # sleep 6
00:18:07.943 06:43:02 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:18:07.943 06:43:02 -- host/multipath.sh@65 -- # dtrace_pid=84784
00:18:07.943 06:43:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 83922 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:18:07.943 06:43:02 -- host/multipath.sh@66 -- # sleep 6
00:18:13.268 06:43:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:18:13.268 06:43:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:18:13.527 06:43:08 -- host/multipath.sh@67 -- # active_port=4421
00:18:13.527 06:43:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:13.527 Attaching 4 probes...
00:18:13.527 @path[10.0.0.2, 4421]: 19239
00:18:13.527 @path[10.0.0.2, 4421]: 19778
00:18:13.527 @path[10.0.0.2, 4421]: 19536
00:18:13.527 @path[10.0.0.2, 4421]: 19713
00:18:13.527 @path[10.0.0.2, 4421]: 19695
00:18:13.527 06:43:08 -- host/multipath.sh@69 -- # sed -n 1p
00:18:13.527 06:43:08 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:18:13.527 06:43:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:18:13.527 06:43:08 -- host/multipath.sh@69 -- # port=4421
00:18:13.527 06:43:08 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:18:13.527 06:43:08 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:18:13.527 06:43:08 -- host/multipath.sh@72 -- # kill 84784
00:18:13.527 06:43:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:18:13.527 06:43:08 -- host/multipath.sh@114 -- # killprocess 83973
00:18:13.527 06:43:08 -- common/autotest_common.sh@936 -- # '[' -z 83973 ']'
00:18:13.527 06:43:08 -- common/autotest_common.sh@940 -- # kill -0 83973
00:18:13.527 06:43:08 -- common/autotest_common.sh@941 -- # uname
00:18:13.527 06:43:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:13.527 06:43:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83973
00:18:13.527 killing process with pid 83973
00:18:13.527 06:43:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:13.527 06:43:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:13.527 06:43:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83973'
00:18:13.527 06:43:08 -- common/autotest_common.sh@955 -- # kill 83973
00:18:13.527 06:43:08 -- common/autotest_common.sh@960 -- # wait 83973
00:18:13.799 Connection closed with partial response:
00:18:13.799
00:18:13.799
00:18:13.799 06:43:09 -- host/multipath.sh@116 -- # wait 83973
00:18:13.799 06:43:09 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:13.799 [2024-12-05 06:42:11.556820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:13.799 [2024-12-05 06:42:11.556942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83973 ]
00:18:13.799 [2024-12-05 06:42:11.691263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.799 [2024-12-05 06:42:11.724434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:13.799 Running I/O for 90 seconds...
00:18:13.799 [2024-12-05 06:42:21.649483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.799 [2024-12-05 06:42:21.649657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.799 [2024-12-05 06:42:21.649704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.799 [2024-12-05 06:42:21.649770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.799 [2024-12-05 06:42:21.649837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:18:13.799 [2024-12-05 06:42:21.649959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.799 [2024-12-05 06:42:21.649972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.649992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.650056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.650138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.650496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.650538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.650985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.650998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.800 [2024-12-05 06:42:21.651440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.800 [2024-12-05 06:42:21.651548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:18:13.800 [2024-12-05 06:42:21.651570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.651819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.651934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.651966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.651985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.651998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.801 [2024-12-05 06:42:21.652875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.801 [2024-12-05 06:42:21.652946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:18:13.801 [2024-12-05 06:42:21.652966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.652979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.652998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:13.802 [2024-12-05 06:42:21.653832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.653975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.653995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:13.802 [2024-12-05 06:42:21.654278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:18:13.802 [2024-12-05 06:42:21.654297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.802 [2024-12-05 06:42:21.654310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.802 [2024-12-05 06:42:21.654358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.802 [2024-12-05 06:42:21.654372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.802 [2024-12-05 06:42:21.654392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.802 [2024-12-05 06:42:21.654405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.802 [2024-12-05 06:42:21.654432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:21.654446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:21.654478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:21.654512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:21.654544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:21.654576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:21.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:21.654627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:21.654640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.233749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.233823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.233897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.233917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.233939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.233954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.233989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.234021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.234425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.234504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.234536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.803 [2024-12-05 06:42:28.234569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.803 [2024-12-05 06:42:28.234667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 
nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.803 [2024-12-05 06:42:28.234950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.803 [2024-12-05 06:42:28.234963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.234982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.234995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.235105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.235173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.235270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:18:13.804 [2024-12-05 06:42:28.235735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.235952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.235971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.235985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.804 [2024-12-05 06:42:28.236017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.804 [2024-12-05 06:42:28.236209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.804 [2024-12-05 06:42:28.236228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.805 [2024-12-05 06:42:28.236763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.236878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.236980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.236998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.805 [2024-12-05 06:42:28.237533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.237619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.237632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.238539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.805 [2024-12-05 06:42:28.238566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.805 [2024-12-05 06:42:28.238610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:13.806 [2024-12-05 06:42:28.238824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.238838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.238878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.238919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.238959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.238986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:28.239605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:28.239705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:28.239719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:35.281544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.806 [2024-12-05 06:42:35.281642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.806 [2024-12-05 06:42:35.281716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.806 [2024-12-05 06:42:35.281737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[... repeated READ/WRITE command/completion pairs on qid:1 omitted; every I/O logged between 06:42:35.281759 and 06:42:35.287459 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:13.810 [2024-12-05 06:42:48.697209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.810 [2024-12-05 06:42:48.697260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command/completion pairs on qid:1 omitted; every I/O logged between 06:42:48.697288 and 06:42:48.699847 completed with ABORTED - SQ DELETION (00/08) ...]
00:18:13.812 [2024-12-05 06:42:48.699862]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.699875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.699891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.699926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.699939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.699954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.699968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.699983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.699997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 
06:43:09 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
[2024-12-05 06:42:48.700664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.812 [2024-12-05 06:42:48.700926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.812 [2024-12-05 06:42:48.700941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.812 [2024-12-05 06:42:48.700955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.700970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.700983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.700998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 
06:42:48.701062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.813 [2024-12-05 06:42:48.701195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf100 is same with the state(5) to be set 00:18:13.813 [2024-12-05 06:42:48.701226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.813 [2024-12-05 06:42:48.701236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.813 [2024-12-05 06:42:48.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:8 PRP1 0x0 PRP2 0x0 00:18:13.813 [2024-12-05 06:42:48.701261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701307] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1baf100 was disconnected and freed. reset controller. 
00:18:13.813 [2024-12-05 06:42:48.701431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.813 [2024-12-05 06:42:48.701458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.813 [2024-12-05 06:42:48.701488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.813 [2024-12-05 06:42:48.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.813 [2024-12-05 06:42:48.701543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.813 [2024-12-05 06:42:48.701556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbe3c0 is same with the state(5) to be set 00:18:13.813 [2024-12-05 06:42:48.702655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.813 [2024-12-05 06:42:48.702695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbe3c0 (9): Bad file descriptor 00:18:13.813 [2024-12-05 06:42:48.702989] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.813 [2024-12-05 06:42:48.703101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.813 [2024-12-05 06:42:48.703154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.813 [2024-12-05 06:42:48.703177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbe3c0 with addr=10.0.0.2, port=4421 00:18:13.813 [2024-12-05 06:42:48.703193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbe3c0 is same with the state(5) to be set 00:18:13.813 [2024-12-05 06:42:48.703227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbe3c0 (9): Bad file descriptor 00:18:13.813 [2024-12-05 06:42:48.703259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.813 [2024-12-05 06:42:48.703276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:13.813 [2024-12-05 06:42:48.703303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.813 [2024-12-05 06:42:48.703351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
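The flood of ABORTED - SQ DELETION (00/08) completions above is the expected NVMe status for this situation (SCT 0, SC 08h: Command Aborted due to SQ Deletion, printed by SPDK as sct/sc) for I/O still queued when the submission queues are torn down, here triggered by the nvmf_delete_subsystem call interleaved in the trace. The connect() failures with errno = 111 are ECONNREFUSED: nothing is listening at 10.0.0.2 any longer, and bdev_nvme keeps retrying until the reconnect on port 4421 succeeds (06:42:58 below). A hypothetical repro of that window, using only rpc.py calls that appear elsewhere in this log; the re-create half is assumed, it is not shown in this excerpt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # tear the subsystem down while bdevperf I/O is in flight (the call seen above)
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # initiator connect() now fails with errno 111 (ECONNREFUSED) until a listener
    # returns; assumed re-create so the reconnect on 4421 can succeed:
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421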
00:18:13.813 [2024-12-05 06:42:48.703373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.813 [2024-12-05 06:42:58.745760] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:13.813 Received shutdown signal, test time was about 55.468954 seconds 00:18:13.813 00:18:13.813 Latency(us) 00:18:13.813 [2024-12-05T06:43:09.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.813 [2024-12-05T06:43:09.279Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.813 Verification LBA range: start 0x0 length 0x4000 00:18:13.813 Nvme0n1 : 55.47 11286.01 44.09 0.00 0.00 11322.22 117.29 7046430.72 00:18:13.813 [2024-12-05T06:43:09.279Z] =================================================================================================================== 00:18:13.813 [2024-12-05T06:43:09.279Z] Total : 11286.01 44.09 0.00 0.00 11322.22 117.29 7046430.72 00:18:14.072 06:43:09 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:14.072 06:43:09 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:14.072 06:43:09 -- host/multipath.sh@125 -- # nvmftestfini 00:18:14.072 06:43:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:14.072 06:43:09 -- nvmf/common.sh@116 -- # sync 00:18:14.072 06:43:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:14.072 06:43:09 -- nvmf/common.sh@119 -- # set +e 00:18:14.072 06:43:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:14.072 06:43:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:14.072 rmmod nvme_tcp 00:18:14.072 rmmod nvme_fabrics 00:18:14.072 rmmod nvme_keyring 00:18:14.072 06:43:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:14.072 06:43:09 -- nvmf/common.sh@123 -- # set -e 00:18:14.072 06:43:09 -- nvmf/common.sh@124 -- # return 0 00:18:14.072 06:43:09 -- nvmf/common.sh@477 -- # '[' -n 83922 ']' 00:18:14.072 06:43:09 -- nvmf/common.sh@478 -- # killprocess 83922 00:18:14.072 06:43:09 -- common/autotest_common.sh@936 -- # '[' -z 83922 ']' 00:18:14.072 06:43:09 -- common/autotest_common.sh@940 -- # kill -0 83922 00:18:14.072 06:43:09 -- common/autotest_common.sh@941 -- # uname 00:18:14.072 06:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.072 06:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83922 00:18:14.072 06:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:14.072 06:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:14.072 killing process with pid 83922 00:18:14.072 06:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83922' 00:18:14.072 06:43:09 -- common/autotest_common.sh@955 -- # kill 83922 00:18:14.072 06:43:09 -- common/autotest_common.sh@960 -- # wait 83922 00:18:14.331 06:43:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:14.331 06:43:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:14.331 06:43:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:14.331 06:43:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.331 06:43:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:14.331 06:43:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.331 06:43:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.331 06:43:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.331 06:43:09 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:14.331 00:18:14.331 real 1m0.596s 00:18:14.331 user 2m48.656s 00:18:14.331 sys 0m18.050s 00:18:14.331 06:43:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:14.331 ************************************ 00:18:14.331 END TEST nvmf_multipath 00:18:14.331 ************************************ 00:18:14.331 06:43:09 -- common/autotest_common.sh@10 -- # set +x 00:18:14.331 06:43:09 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:14.331 06:43:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:14.331 06:43:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:14.331 06:43:09 -- common/autotest_common.sh@10 -- # set +x 00:18:14.331 ************************************ 00:18:14.331 START TEST nvmf_timeout 00:18:14.331 ************************************ 00:18:14.331 06:43:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:14.331 * Looking for test storage... 00:18:14.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.331 06:43:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:14.331 06:43:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:14.331 06:43:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:14.591 06:43:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:14.591 06:43:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:14.591 06:43:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:14.591 06:43:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:14.591 06:43:09 -- scripts/common.sh@335 -- # IFS=.-: 00:18:14.591 06:43:09 -- scripts/common.sh@335 -- # read -ra ver1 00:18:14.591 06:43:09 -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.591 06:43:09 -- scripts/common.sh@336 -- # read -ra ver2 00:18:14.591 06:43:09 -- scripts/common.sh@337 -- # local 'op=<' 00:18:14.591 06:43:09 -- scripts/common.sh@339 -- # ver1_l=2 00:18:14.591 06:43:09 -- scripts/common.sh@340 -- # ver2_l=1 00:18:14.591 06:43:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:14.591 06:43:09 -- scripts/common.sh@343 -- # case "$op" in 00:18:14.591 06:43:09 -- scripts/common.sh@344 -- # : 1 00:18:14.591 06:43:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:14.591 06:43:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.591 06:43:09 -- scripts/common.sh@364 -- # decimal 1 00:18:14.591 06:43:09 -- scripts/common.sh@352 -- # local d=1 00:18:14.591 06:43:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.591 06:43:09 -- scripts/common.sh@354 -- # echo 1 00:18:14.591 06:43:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:14.591 06:43:09 -- scripts/common.sh@365 -- # decimal 2 00:18:14.591 06:43:09 -- scripts/common.sh@352 -- # local d=2 00:18:14.591 06:43:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.591 06:43:09 -- scripts/common.sh@354 -- # echo 2 00:18:14.591 06:43:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:14.591 06:43:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:14.591 06:43:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:14.591 06:43:09 -- scripts/common.sh@367 -- # return 0 00:18:14.591 06:43:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.591 06:43:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:14.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.591 --rc genhtml_branch_coverage=1 00:18:14.591 --rc genhtml_function_coverage=1 00:18:14.591 --rc genhtml_legend=1 00:18:14.591 --rc geninfo_all_blocks=1 00:18:14.591 --rc geninfo_unexecuted_blocks=1 00:18:14.591 00:18:14.591 ' 00:18:14.591 06:43:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:14.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.591 --rc genhtml_branch_coverage=1 00:18:14.591 --rc genhtml_function_coverage=1 00:18:14.591 --rc genhtml_legend=1 00:18:14.591 --rc geninfo_all_blocks=1 00:18:14.591 --rc geninfo_unexecuted_blocks=1 00:18:14.591 00:18:14.591 ' 00:18:14.591 06:43:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:14.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.591 --rc genhtml_branch_coverage=1 00:18:14.591 --rc genhtml_function_coverage=1 00:18:14.591 --rc genhtml_legend=1 00:18:14.591 --rc geninfo_all_blocks=1 00:18:14.591 --rc geninfo_unexecuted_blocks=1 00:18:14.591 00:18:14.591 ' 00:18:14.591 06:43:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:14.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.591 --rc genhtml_branch_coverage=1 00:18:14.591 --rc genhtml_function_coverage=1 00:18:14.591 --rc genhtml_legend=1 00:18:14.591 --rc geninfo_all_blocks=1 00:18:14.591 --rc geninfo_unexecuted_blocks=1 00:18:14.591 00:18:14.592 ' 00:18:14.592 06:43:09 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.592 06:43:09 -- nvmf/common.sh@7 -- # uname -s 00:18:14.592 06:43:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.592 06:43:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.592 06:43:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.592 06:43:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.592 06:43:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.592 06:43:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.592 06:43:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.592 06:43:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.592 06:43:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.592 06:43:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:18:14.592 
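The NVME_HOSTNQN generated here comes from nvme-cli: gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<random-uuid>, and common.sh reuses the UUID part as NVME_HOSTID for the --hostnqn/--hostid pair handed to later nvme connect calls. A minimal sketch of the same derivation (the ##*: trim is an assumption about the mechanism, not a quote from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:910f3027-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # as set on the next lines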
06:43:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:18:14.592 06:43:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.592 06:43:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.592 06:43:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.592 06:43:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.592 06:43:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.592 06:43:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.592 06:43:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.592 06:43:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.592 06:43:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.592 06:43:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.592 06:43:09 -- paths/export.sh@5 -- # export PATH 00:18:14.592 06:43:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.592 06:43:09 -- nvmf/common.sh@46 -- # : 0 00:18:14.592 06:43:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.592 06:43:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.592 06:43:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.592 06:43:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.592 06:43:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.592 06:43:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:14.592 06:43:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.592 06:43:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.592 06:43:09 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.592 06:43:09 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.592 06:43:09 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.592 06:43:09 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:14.592 06:43:09 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.592 06:43:09 -- host/timeout.sh@19 -- # nvmftestinit 00:18:14.592 06:43:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.592 06:43:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.592 06:43:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.592 06:43:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.592 06:43:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.592 06:43:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.592 06:43:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.592 06:43:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.592 06:43:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:14.592 06:43:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:14.592 06:43:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.592 06:43:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.592 06:43:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.592 06:43:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:14.592 06:43:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.592 06:43:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.592 06:43:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.592 06:43:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.592 06:43:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.592 06:43:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.592 06:43:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.592 06:43:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.592 06:43:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:14.592 06:43:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:14.592 Cannot find device "nvmf_tgt_br" 00:18:14.592 06:43:09 -- nvmf/common.sh@154 -- # true 00:18:14.592 06:43:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.592 Cannot find device "nvmf_tgt_br2" 00:18:14.592 06:43:09 -- nvmf/common.sh@155 -- # true 00:18:14.592 06:43:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:14.592 06:43:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:14.592 Cannot find device "nvmf_tgt_br" 00:18:14.592 06:43:09 -- nvmf/common.sh@157 -- # true 00:18:14.592 06:43:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:14.592 Cannot find device "nvmf_tgt_br2" 00:18:14.592 06:43:09 -- nvmf/common.sh@158 -- # true 00:18:14.592 06:43:09 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:14.592 06:43:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:14.592 06:43:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.592 06:43:10 -- nvmf/common.sh@161 -- # true 00:18:14.592 06:43:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.592 06:43:10 -- nvmf/common.sh@162 -- # true 00:18:14.592 06:43:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.592 06:43:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.592 06:43:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.592 06:43:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.852 06:43:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.852 06:43:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.852 06:43:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.852 06:43:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.852 06:43:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.852 06:43:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:14.852 06:43:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:14.852 06:43:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:14.852 06:43:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:14.852 06:43:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.852 06:43:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.852 06:43:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.852 06:43:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:14.852 06:43:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:14.852 06:43:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.852 06:43:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.852 06:43:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.852 06:43:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.852 06:43:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.852 06:43:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:14.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:14.852 00:18:14.852 --- 10.0.0.2 ping statistics --- 00:18:14.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.852 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:14.852 06:43:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:14.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:14.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:18:14.852 00:18:14.852 --- 10.0.0.3 ping statistics --- 00:18:14.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.852 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:14.852 06:43:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:14.852 00:18:14.852 --- 10.0.0.1 ping statistics --- 00:18:14.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.852 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:14.852 06:43:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.852 06:43:10 -- nvmf/common.sh@421 -- # return 0 00:18:14.852 06:43:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:14.852 06:43:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.852 06:43:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:14.852 06:43:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:14.852 06:43:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.852 06:43:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:14.852 06:43:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:14.852 06:43:10 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:14.852 06:43:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:14.852 06:43:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.852 06:43:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.852 06:43:10 -- nvmf/common.sh@469 -- # nvmfpid=85101 00:18:14.853 06:43:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:14.853 06:43:10 -- nvmf/common.sh@470 -- # waitforlisten 85101 00:18:14.853 06:43:10 -- common/autotest_common.sh@829 -- # '[' -z 85101 ']' 00:18:14.853 06:43:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.853 06:43:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.853 06:43:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.853 06:43:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.853 06:43:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.853 [2024-12-05 06:43:10.273378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:14.853 [2024-12-05 06:43:10.273478] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.111 [2024-12-05 06:43:10.405305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:15.111 [2024-12-05 06:43:10.438063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:15.111 [2024-12-05 06:43:10.438219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.111 [2024-12-05 06:43:10.438232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
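The three 0%-loss pings above are the sanity check on the veth/bridge topology that nvmf_veth_init just built: the target runs inside the nvmf_tgt_ns_spdk namespace with two addresses (10.0.0.2 and 10.0.0.3) while the initiator stays in the root namespace on 10.0.0.1, all joined through the nvmf_br bridge. Condensed from the ip commands traced above (link-up steps, iptables rules, and the second veth pair abbreviated), the shape of it is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end, moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # root namespace -> namespaced target, as above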
00:18:15.111 [2024-12-05 06:43:10.438240] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.111 [2024-12-05 06:43:10.438393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.111 [2024-12-05 06:43:10.438793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.111 06:43:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.111 06:43:10 -- common/autotest_common.sh@862 -- # return 0 00:18:15.111 06:43:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:15.111 06:43:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.111 06:43:10 -- common/autotest_common.sh@10 -- # set +x 00:18:15.111 06:43:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.111 06:43:10 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.111 06:43:10 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.679 [2024-12-05 06:43:10.847571] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.679 06:43:10 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:15.938 Malloc0 00:18:15.938 06:43:11 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.197 06:43:11 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.455 06:43:11 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.455 [2024-12-05 06:43:11.913444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.713 06:43:11 -- host/timeout.sh@32 -- # bdevperf_pid=85148 00:18:16.713 06:43:11 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:16.713 06:43:11 -- host/timeout.sh@34 -- # waitforlisten 85148 /var/tmp/bdevperf.sock 00:18:16.713 06:43:11 -- common/autotest_common.sh@829 -- # '[' -z 85148 ']' 00:18:16.713 06:43:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.713 06:43:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.713 06:43:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.713 06:43:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.713 06:43:11 -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 [2024-12-05 06:43:11.968792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
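With the target now serving Malloc0 behind nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, the bdevperf process starting here is the piece under test: it attaches the remote namespace with the two knobs timeout.sh exercises, --reconnect-delay-sec (spacing between reconnect attempts) and --ctrlr-loss-timeout-sec (how long bdev_nvme keeps retrying before giving the controller up). Collapsed from the trace that follows, with the commands as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests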
00:18:16.713 [2024-12-05 06:43:11.968869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85148 ] 00:18:16.713 [2024-12-05 06:43:12.101138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.713 [2024-12-05 06:43:12.134295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.971 06:43:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.971 06:43:12 -- common/autotest_common.sh@862 -- # return 0 00:18:16.971 06:43:12 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:16.971 06:43:12 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:17.538 NVMe0n1 00:18:17.538 06:43:12 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.538 06:43:12 -- host/timeout.sh@51 -- # rpc_pid=85153 00:18:17.538 06:43:12 -- host/timeout.sh@53 -- # sleep 1 00:18:17.538 Running I/O for 10 seconds... 00:18:18.475 06:43:13 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.737 [2024-12-05 06:43:13.968636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbaa60 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.968960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.737 [2024-12-05 06:43:13.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.737 [2024-12-05 06:43:13.969159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.737 [2024-12-05 06:43:13.969178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.737 [2024-12-05 06:43:13.969196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9b610 is same with the state(5) to be set 00:18:18.737 [2024-12-05 06:43:13.969273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 
06:43:13.969518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.969722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.969807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.969913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.737 [2024-12-05 06:43:13.969965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.969975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.969992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.970002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1072 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.737 [2024-12-05 06:43:13.970010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.737 [2024-12-05 06:43:13.970020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 
06:43:13.970192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.738 [2024-12-05 06:43:13.970890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 06:43:13.970918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.738 [2024-12-05 06:43:13.970926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.738 [2024-12-05 
06:43:13.970936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.970944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.970953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.970961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.970971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.970980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.970991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.739 [2024-12-05 06:43:13.971589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.739 [2024-12-05 06:43:13.971704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.739 [2024-12-05 06:43:13.971815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.739 [2024-12-05 06:43:13.971841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.740 [2024-12-05 06:43:13.971850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.971871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.740 [2024-12-05 06:43:13.971890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.971909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.740 [2024-12-05 06:43:13.971928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.740 [2024-12-05 06:43:13.971947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.971965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.971984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.971995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.740 [2024-12-05 06:43:13.972178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f969a0 is same with the state(5) to be set 00:18:18.740 [2024-12-05 06:43:13.972205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.740 [2024-12-05 06:43:13.972212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.740 [2024-12-05 06:43:13.972220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:968 len:8 PRP1 0x0 PRP2 0x0 00:18:18.740 [2024-12-05 06:43:13.972228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.740 [2024-12-05 06:43:13.972266] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f969a0 was disconnected and freed. reset controller. 
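The long run of ABORTED - SQ DELETION completions above is the drain phase: once the connection to 10.0.0.2:4420 is lost, the qpair is torn down and every queued READ/WRITE is completed with an abort status before the controller reset starts. The reconnect cadence that follows is set by the options used when the controller was attached at host/timeout.sh@45-46 earlier in this trace. A minimal sketch of that attach, reusing the repo path, socket, and flags from this run (reading -r as the NVMe retry count is an assumption about this rpc.py invocation):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Lift the retry limit, then attach over TCP with a 5 s controller-loss
  # timeout and a 2 s delay between reconnect attempts.
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

With --reconnect-delay-sec 2 the resets below land roughly two seconds apart (06:43:13, 06:43:15, 06:43:17), which matches the timestamps in the retry loop.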
00:18:18.740 [2024-12-05 06:43:13.972569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.740 [2024-12-05 06:43:13.972594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9b610 (9): Bad file descriptor 00:18:18.740 [2024-12-05 06:43:13.972689] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.740 [2024-12-05 06:43:13.972751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.740 [2024-12-05 06:43:13.972821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.740 [2024-12-05 06:43:13.972836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9b610 with addr=10.0.0.2, port=4420 00:18:18.740 [2024-12-05 06:43:13.972846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9b610 is same with the state(5) to be set 00:18:18.740 [2024-12-05 06:43:13.972864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9b610 (9): Bad file descriptor 00:18:18.740 [2024-12-05 06:43:13.972879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.740 [2024-12-05 06:43:13.972888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:18.740 [2024-12-05 06:43:13.972897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.740 [2024-12-05 06:43:13.972915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.740 [2024-12-05 06:43:13.972925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.740 06:43:13 -- host/timeout.sh@56 -- # sleep 2 00:18:20.760 [2024-12-05 06:43:15.973163] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.760 [2024-12-05 06:43:15.973275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.760 [2024-12-05 06:43:15.973318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.760 [2024-12-05 06:43:15.973346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9b610 with addr=10.0.0.2, port=4420 00:18:20.760 [2024-12-05 06:43:15.973361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9b610 is same with the state(5) to be set 00:18:20.760 [2024-12-05 06:43:15.973386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9b610 (9): Bad file descriptor 00:18:20.760 [2024-12-05 06:43:15.973433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.760 [2024-12-05 06:43:15.973443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:20.760 [2024-12-05 06:43:15.973453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:20.760 [2024-12-05 06:43:15.973481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
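Each attempt in this loop fails with errno = 111 (ECONNREFUSED on Linux): the test removed the target's TCP listener at host/timeout.sh@55, so nothing accepts connections on 10.0.0.2:4420 while the initiator keeps retrying, and both the uring and posix sock layers report the same refusal before the reset is marked failed. A sketch of the fault injection and the later recovery, using the two listener RPCs that appear verbatim in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Inject the fault: drop the listener so the initiator's next connect()
  # returns ECONNREFUSED (errno 111).
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Recover (done at host/timeout.sh@71 below): accept connections again.
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420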
00:18:20.760 [2024-12-05 06:43:15.973492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.760 06:43:15 -- host/timeout.sh@57 -- # get_controller 00:18:20.760 06:43:16 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:20.760 06:43:16 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:21.037 06:43:16 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:21.037 06:43:16 -- host/timeout.sh@58 -- # get_bdev 00:18:21.037 06:43:16 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:21.037 06:43:16 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:21.037 06:43:16 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:21.037 06:43:16 -- host/timeout.sh@61 -- # sleep 5 00:18:22.941 [2024-12-05 06:43:17.973595] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.941 [2024-12-05 06:43:17.973693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.941 [2024-12-05 06:43:17.973733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.941 [2024-12-05 06:43:17.973748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9b610 with addr=10.0.0.2, port=4420 00:18:22.941 [2024-12-05 06:43:17.973760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9b610 is same with the state(5) to be set 00:18:22.941 [2024-12-05 06:43:17.973783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9b610 (9): Bad file descriptor 00:18:22.941 [2024-12-05 06:43:17.973801] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:22.941 [2024-12-05 06:43:17.973810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:22.941 [2024-12-05 06:43:17.973819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:22.941 [2024-12-05 06:43:17.973844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:22.941 [2024-12-05 06:43:17.973854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.845 [2024-12-05 06:43:19.973877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.845 [2024-12-05 06:43:19.973923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.845 [2024-12-05 06:43:19.973949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.845 [2024-12-05 06:43:19.973975] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:24.845 [2024-12-05 06:43:19.974001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
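While the reconnect loop is still inside the 5 s ctrlr-loss-timeout window, the controller and its namespace bdev must stay visible over RPC, which is what the NVMe0 and NVMe0n1 pattern checks above assert. A sketch of that probe using the exact RPCs and jq filter from the trace (get_controller and get_bdev are the test script's own helpers; the ctrlr/bdev variables here are illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  ctrlr=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')  # "NVMe0" while the path is merely down
  bdev=$($rpc bdev_get_bdevs | jq -r '.[].name')              # "NVMe0n1"
  [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]]

Once the loss timeout expires and the controller is destructed, the same probes return empty strings, which is what the [[ '' == '' ]] checks just below expect.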
00:18:25.782
00:18:25.782 Latency(us)
00:18:25.782 [2024-12-05T06:43:21.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:25.782 [2024-12-05T06:43:21.248Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:25.782 Verification LBA range: start 0x0 length 0x4000
00:18:25.782 NVMe0n1 : 8.17 2012.88 7.86 15.66 0.00 63007.27 3157.64 7015926.69
00:18:25.782 [2024-12-05T06:43:21.248Z] ===================================================================================================================
00:18:25.782 [2024-12-05T06:43:21.248Z] Total : 2012.88 7.86 15.66 0.00 63007.27 3157.64 7015926.69
00:18:25.782 0
00:18:26.040 06:43:21 -- host/timeout.sh@62 -- # get_controller
00:18:26.040 06:43:21 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:26.040 06:43:21 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:26.606 06:43:21 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:26.606 06:43:21 -- host/timeout.sh@63 -- # get_bdev
00:18:26.606 06:43:21 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:26.606 06:43:21 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:26.606 06:43:22 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:26.606 06:43:22 -- host/timeout.sh@65 -- # wait 85153
00:18:26.606 06:43:22 -- host/timeout.sh@67 -- # killprocess 85148
00:18:26.606 06:43:22 -- common/autotest_common.sh@936 -- # '[' -z 85148 ']'
00:18:26.606 06:43:22 -- common/autotest_common.sh@940 -- # kill -0 85148
00:18:26.606 06:43:22 -- common/autotest_common.sh@941 -- # uname
00:18:26.606 06:43:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:26.606 06:43:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85148
00:18:26.606 06:43:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:26.606 06:43:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
killing process with pid 85148
06:43:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85148'
Received shutdown signal, test time was about 9.257120 seconds
00:18:26.606
00:18:26.606 Latency(us)
00:18:26.606 [2024-12-05T06:43:22.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:26.606 [2024-12-05T06:43:22.072Z] ===================================================================================================================
00:18:26.606 [2024-12-05T06:43:22.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:26.606 06:43:22 -- common/autotest_common.sh@955 -- # kill 85148
00:18:26.606 06:43:22 -- common/autotest_common.sh@960 -- # wait 85148
00:18:26.865 06:43:22 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:27.123 [2024-12-05 06:43:22.395522] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:27.123 06:43:22 -- host/timeout.sh@74 -- # bdevperf_pid=85281
00:18:27.123 06:43:22 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:27.123 06:43:22 -- host/timeout.sh@76 -- # waitforlisten 85281 /var/tmp/bdevperf.sock
00:18:27.123 06:43:22 -- common/autotest_common.sh@829 -- # '[' -z 85281 ']'
00:18:27.123 06:43:22 -- common/autotest_common.sh@833 -- # local
rpc_addr=/var/tmp/bdevperf.sock 00:18:27.123 06:43:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.123 06:43:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.123 06:43:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.123 06:43:22 -- common/autotest_common.sh@10 -- # set +x 00:18:27.123 [2024-12-05 06:43:22.454822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:27.123 [2024-12-05 06:43:22.454910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85281 ] 00:18:27.123 [2024-12-05 06:43:22.584273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.381 [2024-12-05 06:43:22.618957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.317 06:43:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.317 06:43:23 -- common/autotest_common.sh@862 -- # return 0 00:18:28.317 06:43:23 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:28.317 06:43:23 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:28.884 NVMe0n1 00:18:28.884 06:43:24 -- host/timeout.sh@84 -- # rpc_pid=85299 00:18:28.885 06:43:24 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.885 06:43:24 -- host/timeout.sh@86 -- # sleep 1 00:18:28.885 Running I/O for 10 seconds... 
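The second bdevperf run (pid 85281) attaches the same subsystem with one extra knob: --fast-io-fail-timeout-sec 2, plus a shorter 1 s reconnect delay (host/timeout.sh@79 above). Under bdev_nvme's timeout semantics this should fail pending I/O after about 2 s of path loss while the controller itself is still retried for up to 5 s. A condensed comparison of the two attach calls seen in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # First run (host/timeout.sh@46): I/O is held until the controller is
  # declared lost: --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Second run (host/timeout.sh@79): pending I/O fails fast while reconnects
  # continue every second:
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1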
00:18:29.832 06:43:25 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:30.097 [2024-12-05 06:43:25.316736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dba1b0 is same with the state(5) to be set
[... the recv-state error above repeats ~60 more times for tqpair=0x1dba1b0, 06:43:25.316802 through 06:43:25.317323 ...]
00:18:30.097 [2024-12-05 06:43:25.317381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:30.097 [2024-12-05 06:43:25.317424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ~125 queued READ/WRITE commands on qid:1 (lba 125968-127328) are printed the same way, each completed as ABORTED - SQ DELETION (00/08), 06:43:25.317449 through 06:43:25.320069 ...]
00:18:30.101 [2024-12-05 06:43:25.320080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1394870 is same with the state(5) to be set
00:18:30.101 [2024-12-05 06:43:25.320093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:30.101 [2024-12-05 06:43:25.320100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:30.101 [2024-12-05 06:43:25.320109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127336 len:8 PRP1 0x0 PRP2 0x0
00:18:30.101 [2024-12-05 06:43:25.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.101 [2024-12-05 06:43:25.320158] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1394870 was disconnected and freed. reset controller.
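To read the wall of notices above: host/timeout.sh@87 pulled the target's listener out from under an active 128-deep workload, the initiator's TCP qpair went down, and every command still queued on qid:1 was completed back to bdevperf as ABORTED - SQ DELETION (status code type 00h/generic, status code 08h) before the qpair was freed and a controller reset scheduled. When triaging a saved copy of such a log (the file name below is hypothetical), the aborted queue depth and its read/write mix fall out of a few greps:

    # Total completions aborted when the submission queue was torn down.
    grep -c 'ABORTED - SQ DELETION' bdevperf.log
    # Read/write mix of the aborted in-flight commands.
    grep -o '\*NOTICE\*: READ sqid:1' bdevperf.log | wc -l
    grep -o '\*NOTICE\*: WRITE sqid:1' bdevperf.log | wc -l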
00:18:30.101 [2024-12-05 06:43:25.320424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:30.101 [2024-12-05 06:43:25.320507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor
00:18:30.101 [2024-12-05 06:43:25.320603] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:30.101 [2024-12-05 06:43:25.320664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:30.101 [2024-12-05 06:43:25.320705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:30.101 [2024-12-05 06:43:25.320721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420
00:18:30.101 [2024-12-05 06:43:25.320732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set
00:18:30.101 [2024-12-05 06:43:25.320753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor
00:18:30.101 [2024-12-05 06:43:25.320770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:30.101 [2024-12-05 06:43:25.320779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:30.101 [2024-12-05 06:43:25.320789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:30.101 [2024-12-05 06:43:25.320809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:30.101 [2024-12-05 06:43:25.320819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:30.101 06:43:25 -- host/timeout.sh@90 -- # sleep 1
00:18:31.033 [2024-12-05 06:43:26.320920] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:31.033 [2024-12-05 06:43:26.321017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:31.033 [2024-12-05 06:43:26.321058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:18:31.033 [2024-12-05 06:43:26.321073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420
00:18:31.033 [2024-12-05 06:43:26.321084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set
00:18:31.033 [2024-12-05 06:43:26.321107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor
00:18:31.033 [2024-12-05 06:43:26.321124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:31.033 [2024-12-05 06:43:26.321133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:31.033 [2024-12-05 06:43:26.321142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:31.033 [2024-12-05 06:43:26.321166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
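Both reset attempts above fail identically: errno 111 is ECONNREFUSED, which is what both the io_uring and POSIX socket backends should report while nothing is listening on 10.0.0.2:4420. Note the spacing of the attempts (06:43:25.32, then 06:43:26.32): one second apart, matching --reconnect-delay-sec 1 from the attach step. To confirm the errno name on any Linux box:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused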
00:18:31.033 [2024-12-05 06:43:26.321176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.033 06:43:26 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.290 [2024-12-05 06:43:26.586868] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.291 06:43:26 -- host/timeout.sh@92 -- # wait 85299 00:18:32.225 [2024-12-05 06:43:27.337587] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:38.787 00:18:38.787 Latency(us) 00:18:38.787 [2024-12-05T06:43:34.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.787 [2024-12-05T06:43:34.253Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.787 Verification LBA range: start 0x0 length 0x4000 00:18:38.787 NVMe0n1 : 10.01 9942.69 38.84 0.00 0.00 12850.68 997.93 3019898.88 00:18:38.787 [2024-12-05T06:43:34.253Z] =================================================================================================================== 00:18:38.787 [2024-12-05T06:43:34.253Z] Total : 9942.69 38.84 0.00 0.00 12850.68 997.93 3019898.88 00:18:38.787 0 00:18:38.787 06:43:34 -- host/timeout.sh@97 -- # rpc_pid=85409 00:18:38.787 06:43:34 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.787 06:43:34 -- host/timeout.sh@98 -- # sleep 1 00:18:39.047 Running I/O for 10 seconds... 00:18:39.982 06:43:35 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.242 [2024-12-05 06:43:35.463178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.463468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.463607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.463725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.463862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.464782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.242 [2024-12-05 06:43:35.464821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.464835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.242 [2024-12-05 06:43:35.464844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.464854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.242 [2024-12-05 06:43:35.464863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.464874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:40.242 [2024-12-05 06:43:35.464883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.464893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.465026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7d80 is same with the state(5) to be set 00:18:40.242 [2024-12-05 06:43:35.465223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.465410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.465552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.465619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.465770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.465988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242
[2024-12-05 06:43:35.466197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.242 [2024-12-05 06:43:35.466394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.242 [2024-12-05 06:43:35.466414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.242 [2024-12-05 06:43:35.466500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.242 [2024-12-05 06:43:35.466520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.242 [2024-12-05 06:43:35.466532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:95 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.466960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.466982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.466994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.243 [2024-12-05 06:43:35.467191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.243 [2024-12-05 06:43:35.467385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.243 [2024-12-05 06:43:35.467396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.244 [2024-12-05 06:43:35.467650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 06:43:35.467738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.244 [2024-12-05 06:43:35.467747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.244 [2024-12-05 
06:43:35.467757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.269 [2024-12-05 06:43:35.467865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.269 [2024-12-05 06:43:35.467884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.269 [2024-12-05 06:43:35.467933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.269 [2024-12-05 06:43:35.467943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.467953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.467962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.467973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.467982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.467992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.468316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.468336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.468660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:40.270 [2024-12-05 06:43:35.468726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.469043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.469100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.469155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.469280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.469362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.469466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.469585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.469785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.469908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.469973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.470096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.470213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.470280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.470475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.470546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.470671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.470726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.470779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.470912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.470965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:40.270 [2024-12-05 06:43:35.471161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.270 [2024-12-05 06:43:35.471365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.270 [2024-12-05 06:43:35.471377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.271 [2024-12-05 06:43:35.471386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.271 [2024-12-05 06:43:35.471397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.271 [2024-12-05 06:43:35.471407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.271 [2024-12-05 06:43:35.471417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c130 is same with the state(5) to be set 00:18:40.271 [2024-12-05 06:43:35.471430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:40.271 [2024-12-05 06:43:35.471438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:40.271 [2024-12-05 06:43:35.471447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:8 PRP1 0x0 PRP2 0x0 00:18:40.271 [2024-12-05 06:43:35.471456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:40.271 [2024-12-05 06:43:35.471497] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x144c130 was disconnected and freed. reset controller. 00:18:40.271 [2024-12-05 06:43:35.471759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.271 [2024-12-05 06:43:35.471783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor 00:18:40.271 [2024-12-05 06:43:35.471880] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.271 [2024-12-05 06:43:35.471933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.271 [2024-12-05 06:43:35.471972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.271 [2024-12-05 06:43:35.471987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420 00:18:40.271 [2024-12-05 06:43:35.471997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set 00:18:40.271 [2024-12-05 06:43:35.472016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor 00:18:40.271 [2024-12-05 06:43:35.472032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.271 [2024-12-05 06:43:35.472041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.271 [2024-12-05 06:43:35.472051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.271 [2024-12-05 06:43:35.472070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:40.271 [2024-12-05 06:43:35.472081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:40.271 06:43:35 -- host/timeout.sh@101 -- # sleep 3 00:18:41.206 [2024-12-05 06:43:36.472177] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.206 [2024-12-05 06:43:36.472497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.206 [2024-12-05 06:43:36.472550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.206 [2024-12-05 06:43:36.472568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420 00:18:41.206 [2024-12-05 06:43:36.472580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set 00:18:41.206 [2024-12-05 06:43:36.472619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor 00:18:41.206 [2024-12-05 06:43:36.472639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.206 [2024-12-05 06:43:36.472648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:41.206 [2024-12-05 06:43:36.472659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:41.206 [2024-12-05 06:43:36.472702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.206 [2024-12-05 06:43:36.472713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.143 [2024-12-05 06:43:37.472819] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.143 [2024-12-05 06:43:37.472914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.143 [2024-12-05 06:43:37.472953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.143 [2024-12-05 06:43:37.472968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420 00:18:42.143 [2024-12-05 06:43:37.472979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set 00:18:42.143 [2024-12-05 06:43:37.473003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor 00:18:42.143 [2024-12-05 06:43:37.473020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:42.143 [2024-12-05 06:43:37.473029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:42.143 [2024-12-05 06:43:37.473039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:42.143 [2024-12-05 06:43:37.473063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:42.143 [2024-12-05 06:43:37.473075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.081 [2024-12-05 06:43:38.475079] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.081 [2024-12-05 06:43:38.475177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.081 [2024-12-05 06:43:38.475216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.081 [2024-12-05 06:43:38.475231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1399450 with addr=10.0.0.2, port=4420 00:18:43.081 [2024-12-05 06:43:38.475259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399450 is same with the state(5) to be set 00:18:43.081 [2024-12-05 06:43:38.475443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1399450 (9): Bad file descriptor 00:18:43.081 [2024-12-05 06:43:38.475590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.081 [2024-12-05 06:43:38.475603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:43.081 [2024-12-05 06:43:38.475614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.081 [2024-12-05 06:43:38.477998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:43.081 [2024-12-05 06:43:38.478027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.081 06:43:38 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.370 [2024-12-05 06:43:38.756162] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.370 06:43:38 -- host/timeout.sh@103 -- # wait 85409 00:18:44.328 [2024-12-05 06:43:39.507498] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
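(The phase above repeats the first half of the test. Stripped of the noise, the choreography that produces these logs is just the listener being pulled out from under a live controller and later restored. A hedged reconstruction from the RPCs visible in the trace, with the rpc.py path and NQN copied verbatim from the log and the sleep mirroring host/timeout.sh@101:)

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Pull the listener: queued I/O on qid:1 is aborted (SQ deletion) and every
# reconnect attempt fails with ECONNREFUSED, exactly as logged above.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

sleep 3   # the test waits here (host/timeout.sh@101) while resets keep failing

# Restore the listener: the next reconnect poll succeeds and bdev_nvme prints
# "Resetting controller successful."
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
```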
00:18:49.599
00:18:49.599                                                                  Latency(us)
00:18:49.599 [2024-12-05T06:43:45.065Z] Device Information : runtime(s)    IOPS     MiB/s   Fail/s    TO/s   Average       min        max
00:18:49.599 [2024-12-05T06:43:45.065Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:49.599                             Verification LBA range: start 0x0 length 0x4000
00:18:49.599                             NVMe0n1            :      10.01 8535.91    33.34  5927.64    0.00   8830.85    580.89 3019898.88
00:18:49.599 [2024-12-05T06:43:45.065Z] ===================================================================================================================
00:18:49.599 [2024-12-05T06:43:45.065Z] Total              :            8535.91    33.34  5927.64    0.00   8830.85      0.00 3019898.88
00:18:49.599 0
00:18:49.599 06:43:44 -- host/timeout.sh@105 -- # killprocess 85281
00:18:49.599 06:43:44 -- common/autotest_common.sh@936 -- # '[' -z 85281 ']'
00:18:49.599 06:43:44 -- common/autotest_common.sh@940 -- # kill -0 85281
00:18:49.599 06:43:44 -- common/autotest_common.sh@941 -- # uname
00:18:49.599 06:43:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:49.599 06:43:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85281
00:18:49.599 killing process with pid 85281
00:18:49.599 Received shutdown signal, test time was about 10.000000 seconds
00:18:49.599
00:18:49.599                                                                  Latency(us)
00:18:49.599 [2024-12-05T06:43:45.065Z] Device Information : runtime(s)    IOPS     MiB/s   Fail/s    TO/s   Average       min        max
00:18:49.599 [2024-12-05T06:43:45.065Z] ===================================================================================================================
00:18:49.599 [2024-12-05T06:43:45.065Z] Total              :               0.00     0.00     0.00    0.00      0.00      0.00       0.00
00:18:49.599 06:43:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:49.599 06:43:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:49.599 06:43:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85281'
00:18:49.599 06:43:44 -- common/autotest_common.sh@955 -- # kill 85281
00:18:49.599 06:43:44 -- common/autotest_common.sh@960 -- # wait 85281
00:18:49.599 06:43:44 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:49.599 06:43:44 -- host/timeout.sh@110 -- # bdevperf_pid=85522
00:18:49.599 06:43:44 -- host/timeout.sh@112 -- # waitforlisten 85522 /var/tmp/bdevperf.sock
00:18:49.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:49.599 06:43:44 -- common/autotest_common.sh@829 -- # '[' -z 85522 ']'
00:18:49.599 06:43:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:49.599 06:43:44 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:49.599 06:43:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:49.599 06:43:44 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:49.599 06:43:44 -- common/autotest_common.sh@10 -- # set +x
00:18:49.599 [2024-12-05 06:43:44.560305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
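A quick cross-check of the completed run's result table above: the MiB/s column is just IOPS times the 4096-byte IO size, 8535.91 x 4096 / 2^20 = 33.34 MiB/s, matching the logged value. One way to reproduce the conversion (bc is assumed available; any calculator works):

    # MiB/s = IOPS * IO_size_bytes / 2^20
    echo 'scale=2; 8535.91 * 4096 / 1048576' | bc   # -> 33.34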
00:18:49.599 [2024-12-05 06:43:44.561015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85522 ]
00:18:49.599 [2024-12-05 06:43:44.696036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:49.599 [2024-12-05 06:43:44.730538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:50.165 06:43:45 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:50.165 06:43:45 -- common/autotest_common.sh@862 -- # return 0
00:18:50.165 06:43:45 -- host/timeout.sh@116 -- # dtrace_pid=85534
00:18:50.165 06:43:45 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85522 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:18:50.165 06:43:45 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:18:50.421 06:43:45 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:18:50.678 NVMe0n1
00:18:50.937 06:43:46 -- host/timeout.sh@124 -- # rpc_pid=85581
00:18:50.937 06:43:46 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:50.937 06:43:46 -- host/timeout.sh@125 -- # sleep 1
00:18:50.937 Running I/O for 10 seconds...
00:18:51.875 06:43:47 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:52.137 [2024-12-05 06:43:47.404813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f68c20 is same with the state(5) to be set
00:18:52.137 [... the same tcp.c:1576 recv-state message for tqpair=0x1f68c20 repeats, identical apart from the timestamps 06:43:47.405125 through 06:43:47.406148, interleaved with the nvme_qpair records below ...]
00:18:52.138 [2024-12-05 06:43:47.406073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.138 [2024-12-05 06:43:47.406102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.138 [2024-12-05 06:43:47.406114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.138 [2024-12-05 06:43:47.406123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.138 [2024-12-05 06:43:47.406133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.139 [2024-12-05 06:43:47.406142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.139 [2024-12-05 06:43:47.406151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.139 [2024-12-05 06:43:47.406160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.139 [2024-12-05 06:43:47.406168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553470 is same with the state(5) to be set
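What follows is the flush of the I/O submission queue that the listener removal above triggered: each queued READ is printed and completed with ABORTED - SQ DELETION while bdev_nvme begins its reconnect cycle. How often it retries and when it gives up were fixed by the attach call earlier in this log; here is that call again with the two timing knobs glossed (the glosses are my reading of the option names, the log itself does not explain them):

    # from host/timeout.sh@120 above:
    #   --reconnect-delay-sec 2     -> wait 2 s between reconnect attempts
    #   --ctrlr-loss-timeout-sec 5  -> treat the controller as lost after 5 s of failures
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2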
00:18:52.139 [2024-12-05 06:43:47.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:52.139 [2024-12-05 06:43:47.406235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.139 [... the same READ / ABORTED - SQ DELETION pair repeats for every queued I/O, cid:123 down through cid:11, with a different lba per command, timestamps 06:43:47.406252 through 06:43:47.408671 ...]
00:18:52.142 [2024-12-05 06:43:47.408682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:10 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73432 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.142 [2024-12-05 06:43:47.408930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.408943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e9f0 is same with the state(5) to be set 00:18:52.142 [2024-12-05 06:43:47.408955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.142 [2024-12-05 06:43:47.408963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.142 [2024-12-05 06:43:47.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60424 len:8 PRP1 0x0 PRP2 0x0 00:18:52.142 [2024-12-05 06:43:47.408980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.142 [2024-12-05 06:43:47.409022] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x154e9f0 was disconnected and freed. reset controller. 00:18:52.142 [2024-12-05 06:43:47.409304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.142 [2024-12-05 06:43:47.409349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553470 (9): Bad file descriptor 00:18:52.142 [2024-12-05 06:43:47.409454] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.142 [2024-12-05 06:43:47.409518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.142 [2024-12-05 06:43:47.409560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.142 [2024-12-05 06:43:47.409576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1553470 with addr=10.0.0.2, port=4420 00:18:52.142 [2024-12-05 06:43:47.409587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553470 is same with the state(5) to be set 00:18:52.142 [2024-12-05 06:43:47.409606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553470 (9): Bad file descriptor 00:18:52.142 [2024-12-05 06:43:47.409623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:52.142 [2024-12-05 06:43:47.409632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:52.142 [2024-12-05 06:43:47.421980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:52.142 [2024-12-05 06:43:47.422027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
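The connect() failures above come back with errno = 111 from both the io_uring and POSIX socket paths; on Linux that is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 during the reset window. A quick shell check, no SPDK involved (header path as on glibc systems; it may vary by distro):

# errno 111 on Linux is ECONNREFUSED -- the target is not listening right now
grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
# expected output: #define ECONNREFUSED    111     /* Connection refused */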
00:18:52.142 [2024-12-05 06:43:47.422041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.142 06:43:47 -- host/timeout.sh@128 -- # wait 85581 00:18:54.047 [2024-12-05 06:43:49.422209] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.047 [2024-12-05 06:43:49.422329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.047 [2024-12-05 06:43:49.422387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.047 [2024-12-05 06:43:49.422403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1553470 with addr=10.0.0.2, port=4420 00:18:54.047 [2024-12-05 06:43:49.422416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553470 is same with the state(5) to be set 00:18:54.047 [2024-12-05 06:43:49.422442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553470 (9): Bad file descriptor 00:18:54.047 [2024-12-05 06:43:49.422460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:54.047 [2024-12-05 06:43:49.422469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:54.047 [2024-12-05 06:43:49.422480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:54.047 [2024-12-05 06:43:49.422505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:54.047 [2024-12-05 06:43:49.422515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.580 [2024-12-05 06:43:51.422701] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.580 [2024-12-05 06:43:51.422801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.580 [2024-12-05 06:43:51.422843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.580 [2024-12-05 06:43:51.422860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1553470 with addr=10.0.0.2, port=4420 00:18:56.580 [2024-12-05 06:43:51.422872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553470 is same with the state(5) to be set 00:18:56.580 [2024-12-05 06:43:51.422898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553470 (9): Bad file descriptor 00:18:56.580 [2024-12-05 06:43:51.422916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.580 [2024-12-05 06:43:51.422925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.580 [2024-12-05 06:43:51.422935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.580 [2024-12-05 06:43:51.422986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:56.580 [2024-12-05 06:43:51.423021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.027 [2024-12-05 06:43:53.423109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
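The cycles above all have the same shape -- resetting controller, connect() errno = 111, controller reinitialization failed -- spaced roughly two seconds apart (06:43:47, 06:43:49, 06:43:51, 06:43:53), which is exactly the reconnect delay this timeout test is designed to provoke. A minimal GNU awk sketch that recovers that cadence from a saved copy of this log (the file name build.log is hypothetical):

awk 'match($0, /\[2024-12-05 ([0-9:.]+)\].*resetting controller/, ts) {
       split(ts[1], t, ":")                # "06:43:49.422515" -> hours, minutes, seconds
       s = t[1] * 3600 + t[2] * 60 + t[3]
       if (prev) printf "gap since previous reset attempt: %.3f s\n", s - prev
       prev = s
     }' build.log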
00:18:58.027 [2024-12-05 06:43:53.423206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.027 [2024-12-05 06:43:53.423217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:58.027 [2024-12-05 06:43:53.423227] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:58.027 [2024-12-05 06:43:53.423256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.965 00:18:58.965 Latency(us) 00:18:58.965 [2024-12-05T06:43:54.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.965 [2024-12-05T06:43:54.431Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:58.965 NVMe0n1 : 8.16 2257.45 8.82 15.69 0.00 56222.00 7298.33 7015926.69 00:18:58.965 [2024-12-05T06:43:54.431Z] =================================================================================================================== 00:18:58.965 [2024-12-05T06:43:54.431Z] Total : 2257.45 8.82 15.69 0.00 56222.00 7298.33 7015926.69 00:18:58.965 0 00:18:59.225 06:43:54 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.225 Attaching 5 probes... 00:18:59.225 1392.543490: reset bdev controller NVMe0 00:18:59.225 1392.637951: reconnect bdev controller NVMe0 00:18:59.225 3405.318314: reconnect delay bdev controller NVMe0 00:18:59.225 3405.354399: reconnect bdev controller NVMe0 00:18:59.225 5405.791874: reconnect delay bdev controller NVMe0 00:18:59.225 5405.827786: reconnect bdev controller NVMe0 00:18:59.225 7406.318369: reconnect delay bdev controller NVMe0 00:18:59.225 7406.355944: reconnect bdev controller NVMe0 00:18:59.225 06:43:54 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:59.225 06:43:54 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:59.225 06:43:54 -- host/timeout.sh@136 -- # kill 85534 00:18:59.225 06:43:54 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.225 06:43:54 -- host/timeout.sh@139 -- # killprocess 85522 00:18:59.225 06:43:54 -- common/autotest_common.sh@936 -- # '[' -z 85522 ']' 00:18:59.225 06:43:54 -- common/autotest_common.sh@940 -- # kill -0 85522 00:18:59.225 06:43:54 -- common/autotest_common.sh@941 -- # uname 00:18:59.225 06:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.225 06:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85522 00:18:59.225 killing process with pid 85522 00:18:59.225 Received shutdown signal, test time was about 8.234683 seconds 00:18:59.225 00:18:59.225 Latency(us) 00:18:59.225 [2024-12-05T06:43:54.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.225 [2024-12-05T06:43:54.691Z] =================================================================================================================== 00:18:59.225 [2024-12-05T06:43:54.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.225 06:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:59.225 06:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:59.225 06:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85522' 00:18:59.225 06:43:54 -- common/autotest_common.sh@955 -- # kill 85522 00:18:59.225 06:43:54 -- common/autotest_common.sh@960 -- # wait 85522 00:18:59.225 06:43:54 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.484 06:43:54 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:59.484 06:43:54 -- host/timeout.sh@145 -- # nvmftestfini 00:18:59.484 06:43:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:59.484 06:43:54 -- nvmf/common.sh@116 -- # sync 00:18:59.484 06:43:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:59.484 06:43:54 -- nvmf/common.sh@119 -- # set +e 00:18:59.484 06:43:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:59.484 06:43:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:59.484 rmmod nvme_tcp 00:18:59.484 rmmod nvme_fabrics 00:18:59.484 rmmod nvme_keyring 00:18:59.484 06:43:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:59.484 06:43:54 -- nvmf/common.sh@123 -- # set -e 00:18:59.484 06:43:54 -- nvmf/common.sh@124 -- # return 0 00:18:59.484 06:43:54 -- nvmf/common.sh@477 -- # '[' -n 85101 ']' 00:18:59.484 06:43:54 -- nvmf/common.sh@478 -- # killprocess 85101 00:18:59.484 06:43:54 -- common/autotest_common.sh@936 -- # '[' -z 85101 ']' 00:18:59.484 06:43:54 -- common/autotest_common.sh@940 -- # kill -0 85101 00:18:59.484 06:43:54 -- common/autotest_common.sh@941 -- # uname 00:18:59.484 06:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.484 06:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85101 00:18:59.743 killing process with pid 85101 00:18:59.743 06:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.743 06:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.743 06:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85101' 00:18:59.743 06:43:54 -- common/autotest_common.sh@955 -- # kill 85101 00:18:59.743 06:43:54 -- common/autotest_common.sh@960 -- # wait 85101 00:18:59.743 06:43:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:59.743 06:43:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:59.743 06:43:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:59.743 06:43:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.743 06:43:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:59.743 06:43:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.743 06:43:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.743 06:43:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.743 06:43:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:59.743 00:18:59.743 real 0m45.477s 00:18:59.743 user 2m14.389s 00:18:59.743 sys 0m5.183s 00:18:59.743 06:43:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:59.743 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:18:59.743 ************************************ 00:18:59.743 END TEST nvmf_timeout 00:18:59.743 ************************************ 00:19:00.002 06:43:55 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:00.002 06:43:55 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:00.002 06:43:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.002 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.002 06:43:55 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:00.002 ************************************ 00:19:00.002 END TEST nvmf_tcp 00:19:00.002 ************************************ 00:19:00.002 00:19:00.002 real 10m23.113s 00:19:00.002 user 29m10.426s 00:19:00.002 sys 3m22.498s 00:19:00.002 
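For the record, the pass condition of the timeout test that just finished reduces to the trace-probe count seen a few lines up: host/timeout.sh greps trace.txt for delayed reconnect probes and fails unless there were at least three. Restated as plain bash (a paraphrase of the traced logic, not the script verbatim):

count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
# the traced "(( 3 <= 2 ))" is this check evaluating to false, i.e. the test passing
if (( count <= 2 )); then
    echo "expected at least 3 delayed reconnects, saw only $count" >&2
    exit 1
fi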
06:43:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:00.002 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.002 06:43:55 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:19:00.002 06:43:55 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:00.002 06:43:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:00.002 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.002 ************************************ 00:19:00.002 START TEST nvmf_dif 00:19:00.002 ************************************ 00:19:00.002 06:43:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:00.002 * Looking for test storage... 00:19:00.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:00.002 06:43:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:00.002 06:43:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:00.002 06:43:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:00.002 06:43:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:00.002 06:43:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:00.002 06:43:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:00.002 06:43:55 -- scripts/common.sh@335 -- # IFS=.-: 00:19:00.002 06:43:55 -- scripts/common.sh@335 -- # read -ra ver1 00:19:00.002 06:43:55 -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.002 06:43:55 -- scripts/common.sh@336 -- # read -ra ver2 00:19:00.002 06:43:55 -- scripts/common.sh@337 -- # local 'op=<' 00:19:00.002 06:43:55 -- scripts/common.sh@339 -- # ver1_l=2 00:19:00.002 06:43:55 -- scripts/common.sh@340 -- # ver2_l=1 00:19:00.002 06:43:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:00.002 06:43:55 -- scripts/common.sh@343 -- # case "$op" in 00:19:00.002 06:43:55 -- scripts/common.sh@344 -- # : 1 00:19:00.002 06:43:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:00.002 06:43:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.002 06:43:55 -- scripts/common.sh@364 -- # decimal 1 00:19:00.002 06:43:55 -- scripts/common.sh@352 -- # local d=1 00:19:00.002 06:43:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.002 06:43:55 -- scripts/common.sh@354 -- # echo 1 00:19:00.002 06:43:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:00.002 06:43:55 -- scripts/common.sh@365 -- # decimal 2 00:19:00.002 06:43:55 -- scripts/common.sh@352 -- # local d=2 00:19:00.002 06:43:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.002 06:43:55 -- scripts/common.sh@354 -- # echo 2 00:19:00.002 06:43:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:00.002 06:43:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:00.002 06:43:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:00.002 06:43:55 -- scripts/common.sh@367 -- # return 0 00:19:00.002 06:43:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.002 --rc genhtml_branch_coverage=1 00:19:00.002 --rc genhtml_function_coverage=1 00:19:00.002 --rc genhtml_legend=1 00:19:00.002 --rc geninfo_all_blocks=1 00:19:00.002 --rc geninfo_unexecuted_blocks=1 00:19:00.002 00:19:00.002 ' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.002 --rc genhtml_branch_coverage=1 00:19:00.002 --rc genhtml_function_coverage=1 00:19:00.002 --rc genhtml_legend=1 00:19:00.002 --rc geninfo_all_blocks=1 00:19:00.002 --rc geninfo_unexecuted_blocks=1 00:19:00.002 00:19:00.002 ' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.002 --rc genhtml_branch_coverage=1 00:19:00.002 --rc genhtml_function_coverage=1 00:19:00.002 --rc genhtml_legend=1 00:19:00.002 --rc geninfo_all_blocks=1 00:19:00.002 --rc geninfo_unexecuted_blocks=1 00:19:00.002 00:19:00.002 ' 00:19:00.002 06:43:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.002 --rc genhtml_branch_coverage=1 00:19:00.002 --rc genhtml_function_coverage=1 00:19:00.002 --rc genhtml_legend=1 00:19:00.002 --rc geninfo_all_blocks=1 00:19:00.002 --rc geninfo_unexecuted_blocks=1 00:19:00.002 00:19:00.002 ' 00:19:00.002 06:43:55 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.002 06:43:55 -- nvmf/common.sh@7 -- # uname -s 00:19:00.261 06:43:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.261 06:43:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.261 06:43:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.261 06:43:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.261 06:43:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.261 06:43:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.261 06:43:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.261 06:43:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.261 06:43:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.261 06:43:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:19:00.261 
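The cmp_versions trace above (scripts/common.sh) is a field-by-field numeric comparison: both version strings are split on '.', '-' and ':' and the components are compared left to right, with missing fields treated as 0. A condensed bash sketch of the same idea (a restatement, not the script's actual code; numeric fields only):

version_lt() {  # succeed (return 0) when version $1 sorts strictly before $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # versions are equal
}
version_lt 1.15 2 && echo "1.15 < 2, matching the traced 'lt 1.15 2'"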
06:43:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:19:00.261 06:43:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.261 06:43:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.261 06:43:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.261 06:43:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.261 06:43:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.261 06:43:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.261 06:43:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.261 06:43:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.261 06:43:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.261 06:43:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.261 06:43:55 -- paths/export.sh@5 -- # export PATH 00:19:00.261 06:43:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.261 06:43:55 -- nvmf/common.sh@46 -- # : 0 00:19:00.261 06:43:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.261 06:43:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.261 06:43:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.261 06:43:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.261 06:43:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.261 06:43:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:00.261 06:43:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.261 06:43:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.261 06:43:55 -- target/dif.sh@15 -- # NULL_META=16 00:19:00.261 06:43:55 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:00.261 06:43:55 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:00.261 06:43:55 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:00.261 06:43:55 -- target/dif.sh@135 -- # nvmftestinit 00:19:00.261 06:43:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:00.261 06:43:55 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.261 06:43:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.261 06:43:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.261 06:43:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.261 06:43:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.261 06:43:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:00.261 06:43:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.261 06:43:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:00.261 06:43:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:00.261 06:43:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.261 06:43:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.261 06:43:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.261 06:43:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:00.261 06:43:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.261 06:43:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.261 06:43:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.261 06:43:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.261 06:43:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.261 06:43:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.261 06:43:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.261 06:43:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.261 06:43:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:00.261 06:43:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:00.261 Cannot find device "nvmf_tgt_br" 00:19:00.261 06:43:55 -- nvmf/common.sh@154 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.261 Cannot find device "nvmf_tgt_br2" 00:19:00.261 06:43:55 -- nvmf/common.sh@155 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:00.261 06:43:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:00.261 Cannot find device "nvmf_tgt_br" 00:19:00.261 06:43:55 -- nvmf/common.sh@157 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:00.261 Cannot find device "nvmf_tgt_br2" 00:19:00.261 06:43:55 -- nvmf/common.sh@158 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:00.261 06:43:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:00.261 06:43:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.261 06:43:55 -- nvmf/common.sh@161 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.261 06:43:55 -- nvmf/common.sh@162 -- # true 00:19:00.261 06:43:55 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:19:00.261 06:43:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.261 06:43:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.261 06:43:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.261 06:43:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.261 06:43:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.261 06:43:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.261 06:43:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.261 06:43:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.261 06:43:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:00.261 06:43:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:00.261 06:43:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:00.261 06:43:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:00.518 06:43:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.518 06:43:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.518 06:43:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.518 06:43:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:00.518 06:43:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:00.518 06:43:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.518 06:43:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.519 06:43:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.519 06:43:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.519 06:43:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.519 06:43:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:00.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:00.519 00:19:00.519 --- 10.0.0.2 ping statistics --- 00:19:00.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.519 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:00.519 06:43:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:00.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:19:00.519 00:19:00.519 --- 10.0.0.3 ping statistics --- 00:19:00.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.519 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:00.519 06:43:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:00.519 00:19:00.519 --- 10.0.0.1 ping statistics --- 00:19:00.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.519 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:00.519 06:43:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.519 06:43:55 -- nvmf/common.sh@421 -- # return 0 00:19:00.519 06:43:55 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:00.519 06:43:55 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:00.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.776 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:00.776 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:00.776 06:43:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.776 06:43:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.776 06:43:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.776 06:43:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.776 06:43:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.776 06:43:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.776 06:43:56 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:00.776 06:43:56 -- target/dif.sh@137 -- # nvmfappstart 00:19:00.776 06:43:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.776 06:43:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.776 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:00.776 06:43:56 -- nvmf/common.sh@469 -- # nvmfpid=86032 00:19:00.776 06:43:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.776 06:43:56 -- nvmf/common.sh@470 -- # waitforlisten 86032 00:19:00.776 06:43:56 -- common/autotest_common.sh@829 -- # '[' -z 86032 ']' 00:19:00.776 06:43:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.776 06:43:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.776 06:43:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.776 06:43:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.776 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.034 [2024-12-05 06:43:56.280275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:01.034 [2024-12-05 06:43:56.280381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.034 [2024-12-05 06:43:56.411368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.034 [2024-12-05 06:43:56.444398] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.034 [2024-12-05 06:43:56.444588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.034 [2024-12-05 06:43:56.444602] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
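Stripped of the xtrace noise, the virtual topology those three pings just verified is two veth pairs joined by a bridge, with the target end of one pair pushed into its own network namespace. Condensed from the commands traced above (same interface and address names; the second target interface nvmf_tgt_if2 / 10.0.0.3 follows the same pattern; run as root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target <-> bridge
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # host (initiator side) -> target namespace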
00:19:01.034 [2024-12-05 06:43:56.444611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.034 [2024-12-05 06:43:56.444635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.293 06:43:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.293 06:43:56 -- common/autotest_common.sh@862 -- # return 0 00:19:01.293 06:43:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:01.293 06:43:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 06:43:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.293 06:43:56 -- target/dif.sh@139 -- # create_transport 00:19:01.293 06:43:56 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:01.293 06:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 [2024-12-05 06:43:56.584471] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.293 06:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.293 06:43:56 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:01.293 06:43:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:01.293 06:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 ************************************ 00:19:01.293 START TEST fio_dif_1_default 00:19:01.293 ************************************ 00:19:01.293 06:43:56 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:19:01.293 06:43:56 -- target/dif.sh@86 -- # create_subsystems 0 00:19:01.293 06:43:56 -- target/dif.sh@28 -- # local sub 00:19:01.293 06:43:56 -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.293 06:43:56 -- target/dif.sh@31 -- # create_subsystem 0 00:19:01.293 06:43:56 -- target/dif.sh@18 -- # local sub_id=0 00:19:01.293 06:43:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:01.293 06:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 bdev_null0 00:19:01.293 06:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.293 06:43:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:01.293 06:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 06:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.293 06:43:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:01.293 06:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 06:43:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.293 06:43:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:01.293 06:43:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.293 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 [2024-12-05 06:43:56.628621] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.293 06:43:56 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.293 06:43:56 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:01.293 06:43:56 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:01.293 06:43:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.293 06:43:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.293 06:43:56 -- target/dif.sh@82 -- # gen_fio_conf 00:19:01.293 06:43:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:01.293 06:43:56 -- target/dif.sh@54 -- # local file 00:19:01.293 06:43:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.293 06:43:56 -- target/dif.sh@56 -- # cat 00:19:01.293 06:43:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:01.293 06:43:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.293 06:43:56 -- common/autotest_common.sh@1330 -- # shift 00:19:01.293 06:43:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:01.293 06:43:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.293 06:43:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:01.293 06:43:56 -- nvmf/common.sh@520 -- # config=() 00:19:01.293 06:43:56 -- nvmf/common.sh@520 -- # local subsystem config 00:19:01.293 06:43:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:01.293 06:43:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:01.293 06:43:56 -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.293 06:43:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.293 06:43:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:01.293 { 00:19:01.293 "params": { 00:19:01.293 "name": "Nvme$subsystem", 00:19:01.293 "trtype": "$TEST_TRANSPORT", 00:19:01.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.293 "adrfam": "ipv4", 00:19:01.293 "trsvcid": "$NVMF_PORT", 00:19:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.294 "hdgst": ${hdgst:-false}, 00:19:01.294 "ddgst": ${ddgst:-false} 00:19:01.294 }, 00:19:01.294 "method": "bdev_nvme_attach_controller" 00:19:01.294 } 00:19:01.294 EOF 00:19:01.294 )") 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:01.294 06:43:56 -- nvmf/common.sh@542 -- # cat 00:19:01.294 06:43:56 -- nvmf/common.sh@544 -- # jq . 
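rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt started above; the single-subsystem DIF setup just performed is equivalent to these direct calls (arguments copied from the trace):

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB, 512 B blocks, 16 B metadata, DIF type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420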
00:19:01.294 06:43:56 -- nvmf/common.sh@545 -- # IFS=, 00:19:01.294 06:43:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:01.294 "params": { 00:19:01.294 "name": "Nvme0", 00:19:01.294 "trtype": "tcp", 00:19:01.294 "traddr": "10.0.0.2", 00:19:01.294 "adrfam": "ipv4", 00:19:01.294 "trsvcid": "4420", 00:19:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.294 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:01.294 "hdgst": false, 00:19:01.294 "ddgst": false 00:19:01.294 }, 00:19:01.294 "method": "bdev_nvme_attach_controller" 00:19:01.294 }' 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:01.294 06:43:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:01.294 06:43:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:01.294 06:43:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:01.294 06:43:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:01.294 06:43:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.294 06:43:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.553 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:01.553 fio-3.35 00:19:01.553 Starting 1 thread 00:19:01.811 [2024-12-05 06:43:57.177142] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
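The fio banner that follows ("filename0: ... ioengine=spdk_bdev, iodepth=4") comes from a job file generated on a second file descriptor alongside the JSON just printed. A standalone approximation, with the JSON saved to a file (bdev.json and job.fio are stand-in names; thread=1 is required by the SPDK fio plugin; runtime matches the ~10001 msec run reported below; time_based is a guess):

cat > job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio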
00:19:01.811 [2024-12-05 06:43:57.177225] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:14.016 00:19:14.016 filename0: (groupid=0, jobs=1): err= 0: pid=86087: Thu Dec 5 06:44:07 2024 00:19:14.016 read: IOPS=9527, BW=37.2MiB/s (39.0MB/s)(372MiB/10001msec) 00:19:14.016 slat (usec): min=5, max=1756, avg= 8.03, stdev= 6.72 00:19:14.016 clat (usec): min=305, max=5500, avg=396.15, stdev=56.41 00:19:14.016 lat (usec): min=311, max=5527, avg=404.19, stdev=57.50 00:19:14.016 clat percentiles (usec): 00:19:14.016 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:14.017 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 396], 00:19:14.017 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 486], 00:19:14.017 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 627], 00:19:14.017 | 99.99th=[ 775] 00:19:14.017 bw ( KiB/s): min=35136, max=40096, per=100.00%, avg=38138.95, stdev=1345.91, samples=19 00:19:14.017 iops : min= 8784, max=10024, avg=9534.74, stdev=336.48, samples=19 00:19:14.017 lat (usec) : 500=97.17%, 750=2.82%, 1000=0.01% 00:19:14.017 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:19:14.017 cpu : usr=83.81%, sys=14.25%, ctx=192, majf=0, minf=8 00:19:14.017 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.017 issued rwts: total=95287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.017 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:14.017 00:19:14.017 Run status group 0 (all jobs): 00:19:14.017 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=372MiB (390MB), run=10001-10001msec 00:19:14.017 06:44:07 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:14.017 06:44:07 -- target/dif.sh@43 -- # local sub 00:19:14.017 06:44:07 -- target/dif.sh@45 -- # for sub in "$@" 00:19:14.017 06:44:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:14.017 06:44:07 -- target/dif.sh@36 -- # local sub_id=0 00:19:14.017 06:44:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 00:19:14.017 real 0m10.867s 00:19:14.017 user 0m8.942s 00:19:14.017 sys 0m1.646s 00:19:14.017 06:44:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 ************************************ 00:19:14.017 END TEST fio_dif_1_default 00:19:14.017 ************************************ 00:19:14.017 06:44:07 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:14.017 06:44:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:14.017 06:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 ************************************ 00:19:14.017 START TEST 
fio_dif_1_multi_subsystems 00:19:14.017 ************************************ 00:19:14.017 06:44:07 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:19:14.017 06:44:07 -- target/dif.sh@92 -- # local files=1 00:19:14.017 06:44:07 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:14.017 06:44:07 -- target/dif.sh@28 -- # local sub 00:19:14.017 06:44:07 -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.017 06:44:07 -- target/dif.sh@31 -- # create_subsystem 0 00:19:14.017 06:44:07 -- target/dif.sh@18 -- # local sub_id=0 00:19:14.017 06:44:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 bdev_null0 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 [2024-12-05 06:44:07.550617] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.017 06:44:07 -- target/dif.sh@31 -- # create_subsystem 1 00:19:14.017 06:44:07 -- target/dif.sh@18 -- # local sub_id=1 00:19:14.017 06:44:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 bdev_null1 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.017 06:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.017 06:44:07 -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.017 06:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.017 06:44:07 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:14.017 06:44:07 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:14.017 06:44:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:14.017 06:44:07 -- nvmf/common.sh@520 -- # config=() 00:19:14.017 06:44:07 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.017 06:44:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.017 06:44:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.017 { 00:19:14.017 "params": { 00:19:14.017 "name": "Nvme$subsystem", 00:19:14.017 "trtype": "$TEST_TRANSPORT", 00:19:14.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.017 "adrfam": "ipv4", 00:19:14.017 "trsvcid": "$NVMF_PORT", 00:19:14.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.017 "hdgst": ${hdgst:-false}, 00:19:14.017 "ddgst": ${ddgst:-false} 00:19:14.017 }, 00:19:14.017 "method": "bdev_nvme_attach_controller" 00:19:14.017 } 00:19:14.017 EOF 00:19:14.017 )") 00:19:14.017 06:44:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.017 06:44:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.017 06:44:07 -- target/dif.sh@82 -- # gen_fio_conf 00:19:14.017 06:44:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:14.017 06:44:07 -- target/dif.sh@54 -- # local file 00:19:14.017 06:44:07 -- target/dif.sh@56 -- # cat 00:19:14.017 06:44:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.017 06:44:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:14.017 06:44:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.017 06:44:07 -- common/autotest_common.sh@1330 -- # shift 00:19:14.017 06:44:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:14.017 06:44:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.017 06:44:07 -- nvmf/common.sh@542 -- # cat 00:19:14.017 06:44:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.017 06:44:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:14.017 06:44:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:14.017 06:44:07 -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.017 06:44:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:14.017 06:44:07 -- target/dif.sh@73 -- # cat 00:19:14.017 06:44:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.017 06:44:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.017 { 00:19:14.017 "params": { 00:19:14.017 "name": "Nvme$subsystem", 00:19:14.017 "trtype": "$TEST_TRANSPORT", 00:19:14.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.017 "adrfam": "ipv4", 00:19:14.017 "trsvcid": "$NVMF_PORT", 00:19:14.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.017 "hdgst": ${hdgst:-false}, 00:19:14.017 "ddgst": ${ddgst:-false} 00:19:14.017 }, 00:19:14.017 "method": "bdev_nvme_attach_controller" 00:19:14.017 } 00:19:14.017 EOF 00:19:14.017 )") 00:19:14.017 06:44:07 -- target/dif.sh@72 -- # (( file++ )) 00:19:14.017 06:44:07 -- 
target/dif.sh@72 -- # (( file <= files )) 00:19:14.017 06:44:07 -- nvmf/common.sh@542 -- # cat 00:19:14.017 06:44:07 -- nvmf/common.sh@544 -- # jq . 00:19:14.017 06:44:07 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.017 06:44:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.017 "params": { 00:19:14.017 "name": "Nvme0", 00:19:14.017 "trtype": "tcp", 00:19:14.017 "traddr": "10.0.0.2", 00:19:14.017 "adrfam": "ipv4", 00:19:14.017 "trsvcid": "4420", 00:19:14.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:14.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:14.017 "hdgst": false, 00:19:14.017 "ddgst": false 00:19:14.017 }, 00:19:14.017 "method": "bdev_nvme_attach_controller" 00:19:14.017 },{ 00:19:14.018 "params": { 00:19:14.018 "name": "Nvme1", 00:19:14.018 "trtype": "tcp", 00:19:14.018 "traddr": "10.0.0.2", 00:19:14.018 "adrfam": "ipv4", 00:19:14.018 "trsvcid": "4420", 00:19:14.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.018 "hdgst": false, 00:19:14.018 "ddgst": false 00:19:14.018 }, 00:19:14.018 "method": "bdev_nvme_attach_controller" 00:19:14.018 }' 00:19:14.018 06:44:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:14.018 06:44:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:14.018 06:44:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.018 06:44:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.018 06:44:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:14.018 06:44:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:14.018 06:44:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:14.018 06:44:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:14.018 06:44:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:14.018 06:44:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.018 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:14.018 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:14.018 fio-3.35 00:19:14.018 Starting 2 threads 00:19:14.018 [2024-12-05 06:44:08.191922] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
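[editor's note] The trace above shows dif.sh standing up the two-subsystem topology for this test: one DIF-type-1 null bdev per subsystem, each exported over NVMe/TCP on 10.0.0.2 port 4420 as cnode0 and cnode1, before fio starts its two threads. (The rpc.c messages just above and below appear to come from the fio process trying to bind the default /var/tmp/spdk.sock that the target app already holds; the run proceeds regardless.) A minimal standalone sketch of the same RPC sequence, assuming SPDK's scripts/rpc.py is on PATH and an nvmf target with a tcp transport is already running — only the loop is new, every command and argument mirrors the rpc_cmd calls traced above:

#!/usr/bin/env bash
# Recreate the two bdev_null-backed NVMe/TCP subsystems from the trace.
for i in 0 1; do
    # 64 MiB null bdev, 512 B blocks + 16 B metadata, protection info DIF type 1
    rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    # One subsystem per bdev, open to any host
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    # Attach the bdev as a namespace and listen on NVMe/TCP
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

Giving each null bdev its own subsystem is what lets fio's two job files (filename0, filename1) drive independent NVMe/TCP controllers in the results that follow.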
00:19:14.018 [2024-12-05 06:44:08.192015] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:24.037 00:19:24.037 filename0: (groupid=0, jobs=1): err= 0: pid=86253: Thu Dec 5 06:44:18 2024 00:19:24.037 read: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:19:24.037 slat (nsec): min=6202, max=78121, avg=13399.82, stdev=5064.81 00:19:24.037 clat (usec): min=606, max=3418, avg=743.80, stdev=64.11 00:19:24.037 lat (usec): min=615, max=3451, avg=757.20, stdev=64.83 00:19:24.037 clat percentiles (usec): 00:19:24.037 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 693], 00:19:24.037 | 30.00th=[ 709], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:19:24.037 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 848], 00:19:24.037 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 955], 00:19:24.037 | 99.99th=[ 1012] 00:19:24.037 bw ( KiB/s): min=19872, max=21184, per=50.01%, avg=20488.32, stdev=390.40, samples=19 00:19:24.037 iops : min= 4968, max= 5296, avg=5122.05, stdev=97.50, samples=19 00:19:24.037 lat (usec) : 750=58.33%, 1000=41.66% 00:19:24.037 lat (msec) : 2=0.01%, 4=0.01% 00:19:24.037 cpu : usr=90.68%, sys=7.94%, ctx=9, majf=0, minf=0 00:19:24.037 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.037 issued rwts: total=51212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.037 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:24.037 filename1: (groupid=0, jobs=1): err= 0: pid=86254: Thu Dec 5 06:44:18 2024 00:19:24.037 read: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:19:24.037 slat (nsec): min=6327, max=74264, avg=13247.39, stdev=5011.48 00:19:24.037 clat (usec): min=562, max=3412, avg=745.49, stdev=67.56 00:19:24.037 lat (usec): min=572, max=3439, avg=758.74, stdev=68.42 00:19:24.037 clat percentiles (usec): 00:19:24.037 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 693], 00:19:24.037 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:19:24.037 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:19:24.037 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 971], 00:19:24.037 | 99.99th=[ 1037] 00:19:24.037 bw ( KiB/s): min=19872, max=21184, per=50.01%, avg=20488.74, stdev=390.06, samples=19 00:19:24.037 iops : min= 4968, max= 5296, avg=5122.16, stdev=97.48, samples=19 00:19:24.037 lat (usec) : 750=56.63%, 1000=43.34% 00:19:24.037 lat (msec) : 2=0.02%, 4=0.01% 00:19:24.037 cpu : usr=90.21%, sys=8.40%, ctx=19, majf=0, minf=0 00:19:24.037 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.037 issued rwts: total=51212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.037 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:24.037 00:19:24.037 Run status group 0 (all jobs): 00:19:24.037 READ: bw=40.0MiB/s (41.9MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400MiB (420MB), run=10001-10001msec 00:19:24.037 06:44:18 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:24.037 06:44:18 -- target/dif.sh@43 -- # local sub 00:19:24.037 06:44:18 -- target/dif.sh@45 -- # for sub in "$@" 00:19:24.037 06:44:18 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:19:24.037 06:44:18 -- target/dif.sh@36 -- # local sub_id=0 00:19:24.037 06:44:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 06:44:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 06:44:18 -- target/dif.sh@45 -- # for sub in "$@" 00:19:24.037 06:44:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:24.037 06:44:18 -- target/dif.sh@36 -- # local sub_id=1 00:19:24.037 06:44:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 06:44:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 00:19:24.037 real 0m10.967s 00:19:24.037 user 0m18.701s 00:19:24.037 sys 0m1.888s 00:19:24.037 06:44:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 ************************************ 00:19:24.037 END TEST fio_dif_1_multi_subsystems 00:19:24.037 ************************************ 00:19:24.037 06:44:18 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:24.037 06:44:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:24.037 06:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 ************************************ 00:19:24.037 START TEST fio_dif_rand_params 00:19:24.037 ************************************ 00:19:24.037 06:44:18 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:19:24.037 06:44:18 -- target/dif.sh@100 -- # local NULL_DIF 00:19:24.037 06:44:18 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:24.037 06:44:18 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:24.037 06:44:18 -- target/dif.sh@103 -- # bs=128k 00:19:24.037 06:44:18 -- target/dif.sh@103 -- # numjobs=3 00:19:24.037 06:44:18 -- target/dif.sh@103 -- # iodepth=3 00:19:24.037 06:44:18 -- target/dif.sh@103 -- # runtime=5 00:19:24.037 06:44:18 -- target/dif.sh@105 -- # create_subsystems 0 00:19:24.037 06:44:18 -- target/dif.sh@28 -- # local sub 00:19:24.037 06:44:18 -- target/dif.sh@30 -- # for sub in "$@" 00:19:24.037 06:44:18 -- target/dif.sh@31 -- # create_subsystem 0 00:19:24.037 06:44:18 -- target/dif.sh@18 -- # local sub_id=0 00:19:24.037 06:44:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 bdev_null0 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 
06:44:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.037 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.037 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.037 06:44:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:24.037 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.038 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.038 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.038 06:44:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.038 06:44:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.038 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:19:24.038 [2024-12-05 06:44:18.578457] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.038 06:44:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.038 06:44:18 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:24.038 06:44:18 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:24.038 06:44:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:24.038 06:44:18 -- nvmf/common.sh@520 -- # config=() 00:19:24.038 06:44:18 -- nvmf/common.sh@520 -- # local subsystem config 00:19:24.038 06:44:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.038 06:44:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:24.038 06:44:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.038 06:44:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:24.038 { 00:19:24.038 "params": { 00:19:24.038 "name": "Nvme$subsystem", 00:19:24.038 "trtype": "$TEST_TRANSPORT", 00:19:24.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.038 "adrfam": "ipv4", 00:19:24.038 "trsvcid": "$NVMF_PORT", 00:19:24.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.038 "hdgst": ${hdgst:-false}, 00:19:24.038 "ddgst": ${ddgst:-false} 00:19:24.038 }, 00:19:24.038 "method": "bdev_nvme_attach_controller" 00:19:24.038 } 00:19:24.038 EOF 00:19:24.038 )") 00:19:24.038 06:44:18 -- target/dif.sh@82 -- # gen_fio_conf 00:19:24.038 06:44:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:24.038 06:44:18 -- target/dif.sh@54 -- # local file 00:19:24.038 06:44:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.038 06:44:18 -- target/dif.sh@56 -- # cat 00:19:24.038 06:44:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:24.038 06:44:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.038 06:44:18 -- common/autotest_common.sh@1330 -- # shift 00:19:24.038 06:44:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:24.038 06:44:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.038 06:44:18 -- nvmf/common.sh@542 -- # cat 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # grep libasan 
00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:24.038 06:44:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:24.038 06:44:18 -- target/dif.sh@72 -- # (( file <= files )) 00:19:24.038 06:44:18 -- nvmf/common.sh@544 -- # jq . 00:19:24.038 06:44:18 -- nvmf/common.sh@545 -- # IFS=, 00:19:24.038 06:44:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:24.038 "params": { 00:19:24.038 "name": "Nvme0", 00:19:24.038 "trtype": "tcp", 00:19:24.038 "traddr": "10.0.0.2", 00:19:24.038 "adrfam": "ipv4", 00:19:24.038 "trsvcid": "4420", 00:19:24.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:24.038 "hdgst": false, 00:19:24.038 "ddgst": false 00:19:24.038 }, 00:19:24.038 "method": "bdev_nvme_attach_controller" 00:19:24.038 }' 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.038 06:44:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.038 06:44:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:24.038 06:44:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:24.038 06:44:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:24.038 06:44:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.038 06:44:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.038 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:24.038 ... 00:19:24.038 fio-3.35 00:19:24.038 Starting 3 threads 00:19:24.038 [2024-12-05 06:44:19.082857] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
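[editor's note] For the rand_params run above, dif.sh launches fio through the spdk_bdev ioengine, passing the generated bdev JSON on /dev/fd/62 and the fio job file on /dev/fd/61, with the plugin injected via LD_PRELOAD. A hedged sketch of an equivalent invocation using ordinary files: the attach params are copied from the printf output in the trace, but the outer JSON wrapper ("subsystems"/"bdev"/"config") and the "Nvme0n1" bdev name follow SPDK's usual conventions rather than being copied from dif.sh, and the job options mirror the NULL_DIF=3 parameters set above (bs=128k, numjobs=3, iodepth=3, runtime=5):

#!/usr/bin/env bash
# Bdev config for the fio plugin; wrapper structure is an assumption
# based on SPDK's JSON config format, params are from the trace.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file equivalent to what gen_fio_conf emits for this test; the
# filename "Nvme0n1" (controller name + namespace 1) is an assumption
# based on SPDK's bdev_nvme naming.
cat > /tmp/dif.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
filename=Nvme0n1
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5
time_based=1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif.fio

With numjobs=3 in one section, fio spawns the three identical filename0 threads whose per-thread IOPS/latency blocks appear in the output below.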
00:19:24.038 [2024-12-05 06:44:19.082921] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:29.313 00:19:29.313 filename0: (groupid=0, jobs=1): err= 0: pid=86404: Thu Dec 5 06:44:24 2024 00:19:29.313 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5008msec) 00:19:29.313 slat (nsec): min=7345, max=55912, avg=15336.61, stdev=4805.53 00:19:29.313 clat (usec): min=7842, max=14170, avg=11270.56, stdev=408.62 00:19:29.313 lat (usec): min=7850, max=14195, avg=11285.90, stdev=408.99 00:19:29.313 clat percentiles (usec): 00:19:29.313 | 1.00th=[10421], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:19:29.313 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:19:29.313 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:19:29.313 | 99.00th=[12125], 99.50th=[12125], 99.90th=[14222], 99.95th=[14222], 00:19:29.313 | 99.99th=[14222] 00:19:29.313 bw ( KiB/s): min=33024, max=34629, per=33.34%, avg=33952.50, stdev=495.81, samples=10 00:19:29.313 iops : min= 258, max= 270, avg=265.20, stdev= 3.79, samples=10 00:19:29.313 lat (msec) : 10=0.23%, 20=99.77% 00:19:29.313 cpu : usr=91.63%, sys=7.81%, ctx=7, majf=0, minf=9 00:19:29.313 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.313 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.313 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:29.313 filename0: (groupid=0, jobs=1): err= 0: pid=86405: Thu Dec 5 06:44:24 2024 00:19:29.313 read: IOPS=265, BW=33.1MiB/s (34.7MB/s)(166MiB/5003msec) 00:19:29.313 slat (nsec): min=6926, max=50579, avg=14474.05, stdev=5431.51 00:19:29.313 clat (usec): min=10297, max=16280, avg=11285.79, stdev=421.68 00:19:29.314 lat (usec): min=10304, max=16304, avg=11300.26, stdev=422.07 00:19:29.314 clat percentiles (usec): 00:19:29.314 | 1.00th=[10421], 5.00th=[10683], 10.00th=[10814], 20.00th=[11076], 00:19:29.314 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:19:29.314 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:19:29.314 | 99.00th=[12125], 99.50th=[12125], 99.90th=[16319], 99.95th=[16319], 00:19:29.314 | 99.99th=[16319] 00:19:29.314 bw ( KiB/s): min=33024, max=34560, per=33.26%, avg=33869.67, stdev=449.16, samples=9 00:19:29.314 iops : min= 258, max= 270, avg=264.56, stdev= 3.43, samples=9 00:19:29.314 lat (msec) : 20=100.00% 00:19:29.314 cpu : usr=91.86%, sys=7.62%, ctx=11, majf=0, minf=8 00:19:29.314 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.314 issued rwts: total=1326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:29.314 filename0: (groupid=0, jobs=1): err= 0: pid=86406: Thu Dec 5 06:44:24 2024 00:19:29.314 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5007msec) 00:19:29.314 slat (nsec): min=6634, max=45012, avg=15527.34, stdev=4951.69 00:19:29.314 clat (usec): min=7834, max=12903, avg=11267.08, stdev=391.17 00:19:29.314 lat (usec): min=7841, max=12933, avg=11282.61, stdev=391.64 00:19:29.314 clat percentiles (usec): 00:19:29.314 | 1.00th=[10421], 5.00th=[10683], 10.00th=[10814], 
20.00th=[10945], 00:19:29.314 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:19:29.314 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:19:29.314 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12911], 99.95th=[12911], 00:19:29.314 | 99.99th=[12911] 00:19:29.314 bw ( KiB/s): min=33024, max=34560, per=33.34%, avg=33945.60, stdev=605.81, samples=10 00:19:29.314 iops : min= 258, max= 270, avg=265.20, stdev= 4.73, samples=10 00:19:29.314 lat (msec) : 10=0.23%, 20=99.77% 00:19:29.314 cpu : usr=92.19%, sys=7.27%, ctx=3, majf=0, minf=9 00:19:29.314 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.314 issued rwts: total=1329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:29.314 00:19:29.314 Run status group 0 (all jobs): 00:19:29.314 READ: bw=99.4MiB/s (104MB/s), 33.1MiB/s-33.2MiB/s (34.7MB/s-34.8MB/s), io=498MiB (522MB), run=5003-5008msec 00:19:29.314 06:44:24 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:29.314 06:44:24 -- target/dif.sh@43 -- # local sub 00:19:29.314 06:44:24 -- target/dif.sh@45 -- # for sub in "$@" 00:19:29.314 06:44:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:29.314 06:44:24 -- target/dif.sh@36 -- # local sub_id=0 00:19:29.314 06:44:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # bs=4k 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # numjobs=8 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # iodepth=16 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # runtime= 00:19:29.314 06:44:24 -- target/dif.sh@109 -- # files=2 00:19:29.314 06:44:24 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:29.314 06:44:24 -- target/dif.sh@28 -- # local sub 00:19:29.314 06:44:24 -- target/dif.sh@30 -- # for sub in "$@" 00:19:29.314 06:44:24 -- target/dif.sh@31 -- # create_subsystem 0 00:19:29.314 06:44:24 -- target/dif.sh@18 -- # local sub_id=0 00:19:29.314 06:44:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 bdev_null0 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 [2024-12-05 06:44:24.409763] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@30 -- # for sub in "$@" 00:19:29.314 06:44:24 -- target/dif.sh@31 -- # create_subsystem 1 00:19:29.314 06:44:24 -- target/dif.sh@18 -- # local sub_id=1 00:19:29.314 06:44:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 bdev_null1 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@30 -- # for sub in "$@" 00:19:29.314 06:44:24 -- target/dif.sh@31 -- # create_subsystem 2 00:19:29.314 06:44:24 -- target/dif.sh@18 -- # local sub_id=2 00:19:29.314 06:44:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 bdev_null2 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:29.314 06:44:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.314 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:19:29.314 06:44:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.314 06:44:24 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:29.314 06:44:24 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:29.314 06:44:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:29.314 06:44:24 -- nvmf/common.sh@520 -- # config=() 00:19:29.314 06:44:24 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.314 06:44:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.314 06:44:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.314 06:44:24 -- target/dif.sh@82 -- # gen_fio_conf 00:19:29.314 06:44:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.314 { 00:19:29.314 "params": { 00:19:29.314 "name": "Nvme$subsystem", 00:19:29.314 "trtype": "$TEST_TRANSPORT", 00:19:29.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.314 "adrfam": "ipv4", 00:19:29.314 "trsvcid": "$NVMF_PORT", 00:19:29.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.314 "hdgst": ${hdgst:-false}, 00:19:29.314 "ddgst": ${ddgst:-false} 00:19:29.314 }, 00:19:29.314 "method": "bdev_nvme_attach_controller" 00:19:29.314 } 00:19:29.314 EOF 00:19:29.314 )") 00:19:29.314 06:44:24 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.314 06:44:24 -- target/dif.sh@54 -- # local file 00:19:29.314 06:44:24 -- target/dif.sh@56 -- # cat 00:19:29.314 06:44:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:29.314 06:44:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:29.314 06:44:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:29.314 06:44:24 -- nvmf/common.sh@542 -- # cat 00:19:29.314 06:44:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.315 06:44:24 -- common/autotest_common.sh@1330 -- # shift 00:19:29.315 06:44:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:29.315 06:44:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file <= files )) 00:19:29.315 06:44:24 -- target/dif.sh@73 -- # cat 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:29.315 06:44:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.315 06:44:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.315 { 00:19:29.315 "params": { 00:19:29.315 "name": "Nvme$subsystem", 00:19:29.315 "trtype": "$TEST_TRANSPORT", 00:19:29.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.315 "adrfam": "ipv4", 00:19:29.315 "trsvcid": "$NVMF_PORT", 00:19:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.315 "hdgst": ${hdgst:-false}, 
00:19:29.315 "ddgst": ${ddgst:-false} 00:19:29.315 }, 00:19:29.315 "method": "bdev_nvme_attach_controller" 00:19:29.315 } 00:19:29.315 EOF 00:19:29.315 )") 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file++ )) 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file <= files )) 00:19:29.315 06:44:24 -- target/dif.sh@73 -- # cat 00:19:29.315 06:44:24 -- nvmf/common.sh@542 -- # cat 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file++ )) 00:19:29.315 06:44:24 -- target/dif.sh@72 -- # (( file <= files )) 00:19:29.315 06:44:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.315 06:44:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.315 { 00:19:29.315 "params": { 00:19:29.315 "name": "Nvme$subsystem", 00:19:29.315 "trtype": "$TEST_TRANSPORT", 00:19:29.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.315 "adrfam": "ipv4", 00:19:29.315 "trsvcid": "$NVMF_PORT", 00:19:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.315 "hdgst": ${hdgst:-false}, 00:19:29.315 "ddgst": ${ddgst:-false} 00:19:29.315 }, 00:19:29.315 "method": "bdev_nvme_attach_controller" 00:19:29.315 } 00:19:29.315 EOF 00:19:29.315 )") 00:19:29.315 06:44:24 -- nvmf/common.sh@542 -- # cat 00:19:29.315 06:44:24 -- nvmf/common.sh@544 -- # jq . 00:19:29.315 06:44:24 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.315 06:44:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.315 "params": { 00:19:29.315 "name": "Nvme0", 00:19:29.315 "trtype": "tcp", 00:19:29.315 "traddr": "10.0.0.2", 00:19:29.315 "adrfam": "ipv4", 00:19:29.315 "trsvcid": "4420", 00:19:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:29.315 "hdgst": false, 00:19:29.315 "ddgst": false 00:19:29.315 }, 00:19:29.315 "method": "bdev_nvme_attach_controller" 00:19:29.315 },{ 00:19:29.315 "params": { 00:19:29.315 "name": "Nvme1", 00:19:29.315 "trtype": "tcp", 00:19:29.315 "traddr": "10.0.0.2", 00:19:29.315 "adrfam": "ipv4", 00:19:29.315 "trsvcid": "4420", 00:19:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.315 "hdgst": false, 00:19:29.315 "ddgst": false 00:19:29.315 }, 00:19:29.315 "method": "bdev_nvme_attach_controller" 00:19:29.315 },{ 00:19:29.315 "params": { 00:19:29.315 "name": "Nvme2", 00:19:29.315 "trtype": "tcp", 00:19:29.315 "traddr": "10.0.0.2", 00:19:29.315 "adrfam": "ipv4", 00:19:29.315 "trsvcid": "4420", 00:19:29.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:29.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:29.315 "hdgst": false, 00:19:29.315 "ddgst": false 00:19:29.315 }, 00:19:29.315 "method": "bdev_nvme_attach_controller" 00:19:29.315 }' 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:29.315 06:44:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:29.315 06:44:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:29.315 06:44:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:29.315 06:44:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:29.315 06:44:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:29.315 06:44:24 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:29.315 ... 00:19:29.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:29.315 ... 00:19:29.315 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:29.315 ... 00:19:29.315 fio-3.35 00:19:29.315 Starting 24 threads 00:19:29.883 [2024-12-05 06:44:25.143884] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:29.883 [2024-12-05 06:44:25.144484] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:51.876 00:19:51.876 filename0: (groupid=0, jobs=1): err= 0: pid=86501: Thu Dec 5 06:44:44 2024 00:19:51.876 read: IOPS=643, BW=2572KiB/s (2634kB/s)(25.2MiB/10017msec) 00:19:51.876 slat (usec): min=4, max=8026, avg=29.76, stdev=376.39 00:19:51.876 clat (usec): min=1636, max=134315, avg=24649.62, stdev=11555.54 00:19:51.876 lat (usec): min=1645, max=134330, avg=24679.38, stdev=11556.10 00:19:51.876 clat percentiles (msec): 00:19:51.876 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:19:51.876 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.876 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 36], 95.00th=[ 36], 00:19:51.876 | 99.00th=[ 70], 99.50th=[ 83], 99.90th=[ 136], 99.95th=[ 136], 00:19:51.876 | 99.99th=[ 136] 00:19:51.876 bw ( KiB/s): min= 896, max= 3633, per=5.43%, avg=2575.25, stdev=573.68, samples=20 00:19:51.876 iops : min= 224, max= 908, avg=643.80, stdev=143.39, samples=20 00:19:51.876 lat (msec) : 2=0.12%, 4=0.56%, 10=1.89%, 20=20.69%, 50=74.50% 00:19:51.876 lat (msec) : 100=1.96%, 250=0.28% 00:19:51.876 cpu : usr=32.23%, sys=1.95%, ctx=899, majf=0, minf=9 00:19:51.876 IO depths : 1=1.6%, 2=7.0%, 4=22.4%, 8=57.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:19:51.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 complete : 0=0.0%, 4=93.7%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 issued rwts: total=6442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.876 filename0: (groupid=0, jobs=1): err= 0: pid=86502: Thu Dec 5 06:44:44 2024 00:19:51.876 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10005msec) 00:19:51.876 slat (usec): min=3, max=4037, avg=27.00, stdev=214.38 00:19:51.876 clat (msec): min=6, max=159, avg=38.09, stdev=16.46 00:19:51.876 lat (msec): min=6, max=159, avg=38.12, stdev=16.46 00:19:51.876 clat percentiles (msec): 00:19:51.876 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 24], 00:19:51.876 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 37], 60.00th=[ 43], 00:19:51.876 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 64], 00:19:51.876 | 99.00th=[ 89], 99.50th=[ 101], 99.90th=[ 128], 99.95th=[ 128], 00:19:51.876 | 99.99th=[ 161] 00:19:51.876 bw ( KiB/s): min= 752, max= 2904, per=3.52%, avg=1671.60, stdev=520.76, samples=20 00:19:51.876 iops : min= 188, max= 726, avg=417.90, stdev=130.19, samples=20 00:19:51.876 lat (msec) : 10=0.26%, 20=13.63%, 50=66.51%, 100=19.10%, 250=0.50% 00:19:51.876 cpu : usr=47.30%, sys=2.68%, ctx=1232, majf=0, minf=9 00:19:51.876 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.2%, 16=16.3%, 32=0.0%, >=64=0.0% 
00:19:51.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 issued rwts: total=4189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.876 filename0: (groupid=0, jobs=1): err= 0: pid=86503: Thu Dec 5 06:44:44 2024 00:19:51.876 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10013msec) 00:19:51.876 slat (usec): min=4, max=5026, avg=25.62, stdev=219.10 00:19:51.876 clat (msec): min=12, max=108, avg=38.09, stdev=15.50 00:19:51.876 lat (msec): min=12, max=108, avg=38.12, stdev=15.50 00:19:51.876 clat percentiles (msec): 00:19:51.876 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 20], 20.00th=[ 23], 00:19:51.876 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 39], 60.00th=[ 45], 00:19:51.876 | 70.00th=[ 47], 80.00th=[ 52], 90.00th=[ 55], 95.00th=[ 62], 00:19:51.876 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 103], 99.95th=[ 107], 00:19:51.876 | 99.99th=[ 109] 00:19:51.876 bw ( KiB/s): min= 768, max= 2864, per=3.52%, avg=1672.40, stdev=528.14, samples=20 00:19:51.876 iops : min= 192, max= 716, avg=418.10, stdev=132.03, samples=20 00:19:51.876 lat (msec) : 20=10.90%, 50=66.36%, 100=22.59%, 250=0.14% 00:19:51.876 cpu : usr=46.90%, sys=2.58%, ctx=1523, majf=0, minf=9 00:19:51.876 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=80.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:51.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.876 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.876 filename0: (groupid=0, jobs=1): err= 0: pid=86504: Thu Dec 5 06:44:44 2024 00:19:51.876 read: IOPS=423, BW=1693KiB/s (1734kB/s)(16.6MiB/10022msec) 00:19:51.876 slat (usec): min=3, max=4033, avg=25.20, stdev=197.61 00:19:51.877 clat (msec): min=11, max=103, avg=37.66, stdev=15.09 00:19:51.877 lat (msec): min=11, max=103, avg=37.68, stdev=15.09 00:19:51.877 clat percentiles (msec): 00:19:51.877 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 24], 00:19:51.877 | 30.00th=[ 25], 40.00th=[ 32], 50.00th=[ 38], 60.00th=[ 45], 00:19:51.877 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 63], 00:19:51.877 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 93], 99.95th=[ 94], 00:19:51.877 | 99.99th=[ 105] 00:19:51.877 bw ( KiB/s): min= 880, max= 2760, per=3.55%, avg=1686.16, stdev=519.62, samples=19 00:19:51.877 iops : min= 220, max= 690, avg=421.53, stdev=129.92, samples=19 00:19:51.877 lat (msec) : 20=12.96%, 50=68.77%, 100=18.22%, 250=0.05% 00:19:51.877 cpu : usr=49.36%, sys=2.62%, ctx=1307, majf=0, minf=9 00:19:51.877 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:51.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 issued rwts: total=4243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.877 filename0: (groupid=0, jobs=1): err= 0: pid=86505: Thu Dec 5 06:44:44 2024 00:19:51.877 read: IOPS=634, BW=2538KiB/s (2599kB/s)(24.8MiB/10023msec) 00:19:51.877 slat (usec): min=6, max=8030, avg=20.27, stdev=246.01 00:19:51.877 clat (usec): min=1293, max=132048, avg=25045.52, stdev=11447.81 00:19:51.877 lat (usec): min=1302, max=132061, 
avg=25065.79, stdev=11447.68 00:19:51.877 clat percentiles (msec): 00:19:51.877 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 20], 00:19:51.877 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.877 | 70.00th=[ 25], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 36], 00:19:51.877 | 99.00th=[ 72], 99.50th=[ 84], 99.90th=[ 132], 99.95th=[ 132], 00:19:51.877 | 99.99th=[ 132] 00:19:51.877 bw ( KiB/s): min= 768, max= 4136, per=5.34%, avg=2536.75, stdev=656.99, samples=20 00:19:51.877 iops : min= 192, max= 1034, avg=634.15, stdev=164.28, samples=20 00:19:51.877 lat (msec) : 2=0.17%, 4=0.53%, 10=1.65%, 20=17.67%, 50=77.64% 00:19:51.877 lat (msec) : 100=2.04%, 250=0.28% 00:19:51.877 cpu : usr=31.37%, sys=2.06%, ctx=894, majf=0, minf=9 00:19:51.877 IO depths : 1=1.8%, 2=7.0%, 4=22.1%, 8=57.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:19:51.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 complete : 0=0.0%, 4=93.7%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 issued rwts: total=6360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.877 filename0: (groupid=0, jobs=1): err= 0: pid=86506: Thu Dec 5 06:44:44 2024 00:19:51.877 read: IOPS=664, BW=2658KiB/s (2722kB/s)(26.0MiB/10025msec) 00:19:51.877 slat (usec): min=3, max=8037, avg=16.44, stdev=163.05 00:19:51.877 clat (usec): min=912, max=133442, avg=23929.06, stdev=11137.61 00:19:51.877 lat (usec): min=922, max=133456, avg=23945.51, stdev=11139.09 00:19:51.877 clat percentiles (msec): 00:19:51.877 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 17], 00:19:51.877 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.877 | 70.00th=[ 25], 80.00th=[ 29], 90.00th=[ 35], 95.00th=[ 38], 00:19:51.877 | 99.00th=[ 67], 99.50th=[ 80], 99.90th=[ 134], 99.95th=[ 134], 00:19:51.877 | 99.99th=[ 134] 00:19:51.877 bw ( KiB/s): min= 784, max= 3792, per=5.60%, avg=2657.45, stdev=618.04, samples=20 00:19:51.877 iops : min= 196, max= 948, avg=664.35, stdev=154.52, samples=20 00:19:51.877 lat (usec) : 1000=0.06% 00:19:51.877 lat (msec) : 2=0.12%, 4=0.51%, 10=3.45%, 20=27.02%, 50=66.35% 00:19:51.877 lat (msec) : 100=2.25%, 250=0.24% 00:19:51.877 cpu : usr=41.61%, sys=2.40%, ctx=1300, majf=0, minf=9 00:19:51.877 IO depths : 1=1.6%, 2=7.1%, 4=22.7%, 8=57.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:19:51.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 complete : 0=0.0%, 4=93.8%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.877 filename0: (groupid=0, jobs=1): err= 0: pid=86507: Thu Dec 5 06:44:44 2024 00:19:51.877 read: IOPS=417, BW=1671KiB/s (1711kB/s)(16.3MiB/10011msec) 00:19:51.877 slat (usec): min=4, max=8029, avg=24.63, stdev=223.42 00:19:51.877 clat (usec): min=13519, max=95970, avg=38185.39, stdev=14833.84 00:19:51.877 lat (usec): min=13534, max=95984, avg=38210.01, stdev=14835.94 00:19:51.877 clat percentiles (usec): 00:19:51.877 | 1.00th=[15008], 5.00th=[15926], 10.00th=[19268], 20.00th=[23987], 00:19:51.877 | 30.00th=[25560], 40.00th=[32113], 50.00th=[39584], 60.00th=[44827], 00:19:51.877 | 70.00th=[47973], 80.00th=[49021], 90.00th=[55837], 95.00th=[58983], 00:19:51.877 | 99.00th=[84411], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:19:51.877 | 99.99th=[95945] 00:19:51.877 bw ( KiB/s): min= 976, max= 3086, per=3.54%, 
avg=1679.63, stdev=514.29, samples=19 00:19:51.877 iops : min= 244, max= 771, avg=419.79, stdev=128.54, samples=19 00:19:51.877 lat (msec) : 20=11.21%, 50=70.95%, 100=17.84% 00:19:51.877 cpu : usr=45.57%, sys=2.61%, ctx=1397, majf=0, minf=9 00:19:51.877 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.3%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:51.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 issued rwts: total=4182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.877 filename0: (groupid=0, jobs=1): err= 0: pid=86508: Thu Dec 5 06:44:44 2024 00:19:51.877 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.6MiB/10060msec) 00:19:51.877 slat (usec): min=3, max=8028, avg=20.75, stdev=216.43 00:19:51.877 clat (usec): min=1505, max=107006, avg=31966.34, stdev=14930.63 00:19:51.877 lat (usec): min=1516, max=107019, avg=31987.09, stdev=14934.72 00:19:51.877 clat percentiles (msec): 00:19:51.877 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 22], 00:19:51.877 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 33], 00:19:51.877 | 70.00th=[ 37], 80.00th=[ 47], 90.00th=[ 53], 95.00th=[ 56], 00:19:51.877 | 99.00th=[ 84], 99.50th=[ 92], 99.90th=[ 96], 99.95th=[ 100], 00:19:51.877 | 99.99th=[ 108] 00:19:51.877 bw ( KiB/s): min= 912, max= 3296, per=4.21%, avg=1996.55, stdev=677.91, samples=20 00:19:51.877 iops : min= 228, max= 824, avg=499.10, stdev=169.44, samples=20 00:19:51.877 lat (msec) : 2=0.10%, 10=1.22%, 20=16.84%, 50=69.49%, 100=12.31% 00:19:51.877 lat (msec) : 250=0.04% 00:19:51.877 cpu : usr=52.26%, sys=3.02%, ctx=1049, majf=0, minf=9 00:19:51.877 IO depths : 1=0.7%, 2=3.2%, 4=11.0%, 8=71.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:19:51.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 complete : 0=0.0%, 4=90.4%, 8=7.1%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.877 issued rwts: total=5005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.877 filename1: (groupid=0, jobs=1): err= 0: pid=86509: Thu Dec 5 06:44:44 2024 00:19:51.877 read: IOPS=642, BW=2571KiB/s (2633kB/s)(25.2MiB/10046msec) 00:19:51.877 slat (usec): min=4, max=8020, avg=20.35, stdev=187.36 00:19:51.877 clat (usec): min=1367, max=104173, avg=24735.08, stdev=11192.10 00:19:51.877 lat (usec): min=1376, max=104188, avg=24755.43, stdev=11195.10 00:19:51.877 clat percentiles (msec): 00:19:51.877 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 18], 00:19:51.877 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.877 | 70.00th=[ 26], 80.00th=[ 32], 90.00th=[ 36], 95.00th=[ 39], 00:19:51.877 | 99.00th=[ 72], 99.50th=[ 89], 99.90th=[ 101], 99.95th=[ 104], 00:19:51.877 | 99.99th=[ 105] 00:19:51.877 bw ( KiB/s): min= 1008, max= 3480, per=5.43%, avg=2575.90, stdev=620.33, samples=20 00:19:51.877 iops : min= 252, max= 870, avg=643.95, stdev=155.09, samples=20 00:19:51.878 lat (msec) : 2=0.37%, 4=0.25%, 10=1.95%, 20=27.02%, 50=67.46% 00:19:51.878 lat (msec) : 100=2.85%, 250=0.09% 00:19:51.878 cpu : usr=43.87%, sys=2.60%, ctx=1403, majf=0, minf=9 00:19:51.878 IO depths : 1=1.5%, 2=6.8%, 4=22.2%, 8=58.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=93.7%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: 
total=6457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86510: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.7MiB/10013msec) 00:19:51.878 slat (usec): min=3, max=4024, avg=19.28, stdev=150.19 00:19:51.878 clat (usec): min=12640, max=93879, avg=37358.12, stdev=14926.15 00:19:51.878 lat (usec): min=12657, max=93895, avg=37377.40, stdev=14927.50 00:19:51.878 clat percentiles (usec): 00:19:51.878 | 1.00th=[14877], 5.00th=[15926], 10.00th=[17695], 20.00th=[23725], 00:19:51.878 | 30.00th=[24773], 40.00th=[31851], 50.00th=[34866], 60.00th=[42730], 00:19:51.878 | 70.00th=[47973], 80.00th=[49021], 90.00th=[55313], 95.00th=[60556], 00:19:51.878 | 99.00th=[83362], 99.50th=[84411], 99.90th=[91751], 99.95th=[91751], 00:19:51.878 | 99.99th=[93848] 00:19:51.878 bw ( KiB/s): min= 880, max= 2912, per=3.58%, avg=1697.37, stdev=507.05, samples=19 00:19:51.878 iops : min= 220, max= 728, avg=424.32, stdev=126.80, samples=19 00:19:51.878 lat (msec) : 20=12.72%, 50=69.11%, 100=18.17% 00:19:51.878 cpu : usr=48.71%, sys=2.78%, ctx=1301, majf=0, minf=9 00:19:51.878 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: total=4277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86511: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=440, BW=1760KiB/s (1803kB/s)(17.2MiB/10002msec) 00:19:51.878 slat (usec): min=3, max=4029, avg=22.80, stdev=170.89 00:19:51.878 clat (msec): min=5, max=151, avg=36.25, stdev=15.36 00:19:51.878 lat (msec): min=5, max=151, avg=36.27, stdev=15.36 00:19:51.878 clat percentiles (msec): 00:19:51.878 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.878 | 30.00th=[ 25], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 41], 00:19:51.878 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 55], 95.00th=[ 58], 00:19:51.878 | 99.00th=[ 83], 99.50th=[ 89], 99.90th=[ 116], 99.95th=[ 116], 00:19:51.878 | 99.99th=[ 153] 00:19:51.878 bw ( KiB/s): min= 880, max= 2920, per=3.66%, avg=1737.32, stdev=533.34, samples=19 00:19:51.878 iops : min= 220, max= 730, avg=434.32, stdev=133.35, samples=19 00:19:51.878 lat (msec) : 10=0.50%, 20=15.45%, 50=68.17%, 100=15.52%, 250=0.36% 00:19:51.878 cpu : usr=47.99%, sys=2.37%, ctx=1341, majf=0, minf=9 00:19:51.878 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: total=4402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86512: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=443, BW=1775KiB/s (1817kB/s)(17.3MiB/10003msec) 00:19:51.878 slat (usec): min=3, max=6027, avg=23.99, stdev=191.00 00:19:51.878 clat (msec): min=9, max=167, avg=35.97, stdev=16.18 00:19:51.878 lat (msec): min=9, max=167, avg=35.99, stdev=16.18 00:19:51.878 clat percentiles (msec): 00:19:51.878 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 23], 00:19:51.878 | 30.00th=[ 24], 
40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 41], 00:19:51.878 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 55], 95.00th=[ 58], 00:19:51.878 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 132], 99.95th=[ 132], 00:19:51.878 | 99.99th=[ 167] 00:19:51.878 bw ( KiB/s): min= 768, max= 2968, per=3.70%, avg=1755.79, stdev=564.84, samples=19 00:19:51.878 iops : min= 192, max= 742, avg=438.95, stdev=141.21, samples=19 00:19:51.878 lat (msec) : 10=0.14%, 20=16.83%, 50=68.43%, 100=14.20%, 250=0.41% 00:19:51.878 cpu : usr=46.92%, sys=2.41%, ctx=1373, majf=0, minf=9 00:19:51.878 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: total=4438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86513: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=428, BW=1715KiB/s (1756kB/s)(16.8MiB/10010msec) 00:19:51.878 slat (usec): min=3, max=4031, avg=25.37, stdev=195.82 00:19:51.878 clat (msec): min=10, max=131, avg=37.21, stdev=15.81 00:19:51.878 lat (msec): min=10, max=131, avg=37.23, stdev=15.82 00:19:51.878 clat percentiles (msec): 00:19:51.878 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.878 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 43], 00:19:51.878 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 55], 95.00th=[ 61], 00:19:51.878 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 97], 99.95th=[ 97], 00:19:51.878 | 99.99th=[ 132] 00:19:51.878 bw ( KiB/s): min= 768, max= 2920, per=3.60%, avg=1710.40, stdev=539.14, samples=20 00:19:51.878 iops : min= 192, max= 730, avg=427.60, stdev=134.78, samples=20 00:19:51.878 lat (msec) : 20=14.47%, 50=69.55%, 100=15.94%, 250=0.05% 00:19:51.878 cpu : usr=46.48%, sys=2.36%, ctx=1392, majf=0, minf=9 00:19:51.878 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: total=4292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86514: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10018msec) 00:19:51.878 slat (usec): min=4, max=5024, avg=22.39, stdev=190.58 00:19:51.878 clat (msec): min=12, max=104, avg=37.82, stdev=15.15 00:19:51.878 lat (msec): min=12, max=104, avg=37.84, stdev=15.15 00:19:51.878 clat percentiles (msec): 00:19:51.878 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.878 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 40], 60.00th=[ 46], 00:19:51.878 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 61], 00:19:51.878 | 99.00th=[ 81], 99.50th=[ 86], 99.90th=[ 93], 99.95th=[ 105], 00:19:51.878 | 99.99th=[ 105] 00:19:51.878 bw ( KiB/s): min= 1142, max= 3142, per=3.55%, avg=1686.32, stdev=537.18, samples=19 00:19:51.878 iops : min= 285, max= 785, avg=421.47, stdev=134.28, samples=19 00:19:51.878 lat (msec) : 20=13.25%, 50=68.52%, 100=18.16%, 250=0.07% 00:19:51.878 cpu : usr=55.83%, sys=3.06%, ctx=1105, majf=0, minf=9 00:19:51.878 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:19:51.878 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.878 issued rwts: total=4228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.878 filename1: (groupid=0, jobs=1): err= 0: pid=86515: Thu Dec 5 06:44:44 2024 00:19:51.878 read: IOPS=441, BW=1766KiB/s (1809kB/s)(17.3MiB/10005msec) 00:19:51.878 slat (usec): min=3, max=4026, avg=22.80, stdev=161.05 00:19:51.879 clat (msec): min=4, max=154, avg=36.13, stdev=15.45 00:19:51.879 lat (msec): min=4, max=154, avg=36.15, stdev=15.46 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 23], 00:19:51.879 | 30.00th=[ 26], 40.00th=[ 31], 50.00th=[ 35], 60.00th=[ 40], 00:19:51.879 | 70.00th=[ 46], 80.00th=[ 50], 90.00th=[ 54], 95.00th=[ 59], 00:19:51.879 | 99.00th=[ 85], 99.50th=[ 90], 99.90th=[ 121], 99.95th=[ 121], 00:19:51.879 | 99.99th=[ 155] 00:19:51.879 bw ( KiB/s): min= 880, max= 2880, per=3.67%, avg=1741.05, stdev=513.96, samples=19 00:19:51.879 iops : min= 220, max= 720, avg=435.26, stdev=128.49, samples=19 00:19:51.879 lat (msec) : 10=0.72%, 20=13.13%, 50=67.36%, 100=18.42%, 250=0.36% 00:19:51.879 cpu : usr=47.39%, sys=2.17%, ctx=1544, majf=0, minf=9 00:19:51.879 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:51.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 issued rwts: total=4418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.879 filename1: (groupid=0, jobs=1): err= 0: pid=86516: Thu Dec 5 06:44:44 2024 00:19:51.879 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10003msec) 00:19:51.879 slat (usec): min=3, max=4029, avg=20.15, stdev=126.98 00:19:51.879 clat (msec): min=3, max=155, avg=35.06, stdev=15.42 00:19:51.879 lat (msec): min=3, max=155, avg=35.08, stdev=15.42 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 23], 00:19:51.879 | 30.00th=[ 24], 40.00th=[ 31], 50.00th=[ 33], 60.00th=[ 39], 00:19:51.879 | 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 54], 95.00th=[ 56], 00:19:51.879 | 99.00th=[ 85], 99.50th=[ 96], 99.90th=[ 121], 99.95th=[ 121], 00:19:51.879 | 99.99th=[ 157] 00:19:51.879 bw ( KiB/s): min= 848, max= 3048, per=3.78%, avg=1795.53, stdev=556.84, samples=19 00:19:51.879 iops : min= 212, max= 762, avg=448.84, stdev=139.25, samples=19 00:19:51.879 lat (msec) : 4=0.04%, 10=0.66%, 20=16.84%, 50=69.41%, 100=12.69% 00:19:51.879 lat (msec) : 250=0.35% 00:19:51.879 cpu : usr=57.47%, sys=2.88%, ctx=1024, majf=0, minf=9 00:19:51.879 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=83.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:51.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 issued rwts: total=4554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.879 filename2: (groupid=0, jobs=1): err= 0: pid=86517: Thu Dec 5 06:44:44 2024 00:19:51.879 read: IOPS=649, BW=2599KiB/s (2661kB/s)(25.5MiB/10041msec) 00:19:51.879 slat (usec): min=5, max=9022, avg=18.14, stdev=217.21 00:19:51.879 clat (usec): min=1213, max=120001, avg=24448.58, stdev=10331.75 00:19:51.879 lat (usec): min=1229, 
max=120009, avg=24466.73, stdev=10332.13 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 00:19:51.879 | 30.00th=[ 21], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.879 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 36], 95.00th=[ 39], 00:19:51.879 | 99.00th=[ 70], 99.50th=[ 80], 99.90th=[ 90], 99.95th=[ 94], 00:19:51.879 | 99.99th=[ 121] 00:19:51.879 bw ( KiB/s): min= 1040, max= 3681, per=5.48%, avg=2602.45, stdev=603.12, samples=20 00:19:51.879 iops : min= 260, max= 920, avg=650.60, stdev=150.76, samples=20 00:19:51.879 lat (msec) : 2=0.18%, 4=0.37%, 10=1.76%, 20=25.15%, 50=69.99% 00:19:51.879 lat (msec) : 100=2.53%, 250=0.02% 00:19:51.879 cpu : usr=36.06%, sys=2.08%, ctx=1153, majf=0, minf=9 00:19:51.879 IO depths : 1=1.8%, 2=6.9%, 4=21.4%, 8=58.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:19:51.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 complete : 0=0.0%, 4=93.4%, 8=1.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 issued rwts: total=6524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.879 filename2: (groupid=0, jobs=1): err= 0: pid=86518: Thu Dec 5 06:44:44 2024 00:19:51.879 read: IOPS=628, BW=2514KiB/s (2575kB/s)(24.7MiB/10049msec) 00:19:51.879 slat (usec): min=3, max=8024, avg=19.89, stdev=213.74 00:19:51.879 clat (usec): min=1683, max=122409, avg=25270.41, stdev=10608.67 00:19:51.879 lat (usec): min=1692, max=122423, avg=25290.30, stdev=10612.11 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 18], 00:19:51.879 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.879 | 70.00th=[ 26], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 40], 00:19:51.879 | 99.00th=[ 66], 99.50th=[ 74], 99.90th=[ 100], 99.95th=[ 106], 00:19:51.879 | 99.99th=[ 123] 00:19:51.879 bw ( KiB/s): min= 1088, max= 3320, per=5.31%, avg=2520.40, stdev=547.64, samples=20 00:19:51.879 iops : min= 272, max= 830, avg=630.10, stdev=136.91, samples=20 00:19:51.879 lat (msec) : 2=0.28%, 4=0.19%, 10=0.84%, 20=22.57%, 50=73.58% 00:19:51.879 lat (msec) : 100=2.44%, 250=0.09% 00:19:51.879 cpu : usr=34.01%, sys=2.18%, ctx=947, majf=0, minf=9 00:19:51.879 IO depths : 1=1.7%, 2=6.7%, 4=20.9%, 8=59.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:19:51.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 complete : 0=0.0%, 4=93.4%, 8=1.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 issued rwts: total=6317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.879 filename2: (groupid=0, jobs=1): err= 0: pid=86519: Thu Dec 5 06:44:44 2024 00:19:51.879 read: IOPS=440, BW=1763KiB/s (1805kB/s)(17.2MiB/10010msec) 00:19:51.879 slat (usec): min=3, max=8025, avg=22.65, stdev=186.60 00:19:51.879 clat (msec): min=11, max=132, avg=36.21, stdev=14.77 00:19:51.879 lat (msec): min=11, max=132, avg=36.23, stdev=14.77 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.879 | 30.00th=[ 25], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 41], 00:19:51.879 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 55], 95.00th=[ 57], 00:19:51.879 | 99.00th=[ 85], 99.50th=[ 86], 99.90th=[ 97], 99.95th=[ 100], 00:19:51.879 | 99.99th=[ 133] 00:19:51.879 bw ( KiB/s): min= 784, max= 2864, per=3.71%, avg=1760.80, stdev=513.47, samples=20 00:19:51.879 iops : min= 196, max= 716, 
avg=440.20, stdev=128.37, samples=20 00:19:51.879 lat (msec) : 20=14.26%, 50=70.81%, 100=14.89%, 250=0.05% 00:19:51.879 cpu : usr=46.10%, sys=2.27%, ctx=1374, majf=0, minf=9 00:19:51.879 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:51.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.879 issued rwts: total=4412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.879 filename2: (groupid=0, jobs=1): err= 0: pid=86520: Thu Dec 5 06:44:44 2024 00:19:51.879 read: IOPS=657, BW=2631KiB/s (2694kB/s)(25.8MiB/10039msec) 00:19:51.879 slat (usec): min=8, max=9022, avg=20.32, stdev=206.51 00:19:51.879 clat (usec): min=1014, max=141859, avg=24150.28, stdev=11243.49 00:19:51.879 lat (usec): min=1024, max=141873, avg=24170.60, stdev=11243.48 00:19:51.879 clat percentiles (msec): 00:19:51.879 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 16], 20.00th=[ 17], 00:19:51.879 | 30.00th=[ 20], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 24], 00:19:51.879 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 34], 95.00th=[ 39], 00:19:51.879 | 99.00th=[ 65], 99.50th=[ 81], 99.90th=[ 142], 99.95th=[ 142], 00:19:51.879 | 99.99th=[ 142] 00:19:51.879 bw ( KiB/s): min= 928, max= 3712, per=5.55%, avg=2635.20, stdev=600.53, samples=20 00:19:51.880 iops : min= 232, max= 928, avg=658.80, stdev=150.13, samples=20 00:19:51.880 lat (msec) : 2=0.33%, 4=0.33%, 10=3.06%, 20=28.73%, 50=64.93% 00:19:51.880 lat (msec) : 100=2.39%, 250=0.21% 00:19:51.880 cpu : usr=43.60%, sys=2.80%, ctx=1356, majf=0, minf=9 00:19:51.880 IO depths : 1=1.6%, 2=7.5%, 4=23.9%, 8=56.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:19:51.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.880 filename2: (groupid=0, jobs=1): err= 0: pid=86521: Thu Dec 5 06:44:44 2024 00:19:51.880 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10010msec) 00:19:51.880 slat (usec): min=4, max=7021, avg=19.10, stdev=150.87 00:19:51.880 clat (msec): min=10, max=133, avg=38.33, stdev=15.54 00:19:51.880 lat (msec): min=10, max=133, avg=38.35, stdev=15.54 00:19:51.880 clat percentiles (msec): 00:19:51.880 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 20], 20.00th=[ 24], 00:19:51.880 | 30.00th=[ 27], 40.00th=[ 32], 50.00th=[ 39], 60.00th=[ 45], 00:19:51.880 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 63], 00:19:51.880 | 99.00th=[ 85], 99.50th=[ 90], 99.90th=[ 97], 99.95th=[ 104], 00:19:51.880 | 99.99th=[ 134] 00:19:51.880 bw ( KiB/s): min= 880, max= 2856, per=3.50%, avg=1661.20, stdev=511.30, samples=20 00:19:51.880 iops : min= 220, max= 714, avg=415.30, stdev=127.82, samples=20 00:19:51.880 lat (msec) : 20=12.35%, 50=68.75%, 100=18.81%, 250=0.10% 00:19:51.880 cpu : usr=47.05%, sys=2.35%, ctx=1339, majf=0, minf=9 00:19:51.880 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=79.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:51.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 issued rwts: total=4169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.880 filename2: 
(groupid=0, jobs=1): err= 0: pid=86522: Thu Dec 5 06:44:44 2024 00:19:51.880 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.5MiB/10001msec) 00:19:51.880 slat (usec): min=3, max=4033, avg=21.57, stdev=163.02 00:19:51.880 clat (usec): min=1404, max=162843, avg=37691.24, stdev=17432.80 00:19:51.880 lat (usec): min=1412, max=162853, avg=37712.81, stdev=17431.50 00:19:51.880 clat percentiles (msec): 00:19:51.880 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.880 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 35], 60.00th=[ 43], 00:19:51.880 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 56], 95.00th=[ 64], 00:19:51.880 | 99.00th=[ 85], 99.50th=[ 133], 99.90th=[ 148], 99.95th=[ 148], 00:19:51.880 | 99.99th=[ 163] 00:19:51.880 bw ( KiB/s): min= 752, max= 2856, per=3.47%, avg=1648.84, stdev=542.11, samples=19 00:19:51.880 iops : min= 188, max= 714, avg=412.21, stdev=135.53, samples=19 00:19:51.880 lat (msec) : 2=0.76%, 4=0.09%, 10=0.52%, 20=12.70%, 50=67.70% 00:19:51.880 lat (msec) : 100=17.52%, 250=0.71% 00:19:51.880 cpu : usr=46.22%, sys=2.32%, ctx=1292, majf=0, minf=9 00:19:51.880 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=80.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:51.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 issued rwts: total=4235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.880 filename2: (groupid=0, jobs=1): err= 0: pid=86523: Thu Dec 5 06:44:44 2024 00:19:51.880 read: IOPS=439, BW=1756KiB/s (1798kB/s)(17.2MiB/10005msec) 00:19:51.880 slat (usec): min=6, max=5024, avg=28.27, stdev=227.12 00:19:51.880 clat (msec): min=6, max=165, avg=36.32, stdev=15.79 00:19:51.880 lat (msec): min=6, max=165, avg=36.35, stdev=15.79 00:19:51.880 clat percentiles (msec): 00:19:51.880 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.880 | 30.00th=[ 25], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 41], 00:19:51.880 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 55], 95.00th=[ 56], 00:19:51.880 | 99.00th=[ 86], 99.50th=[ 96], 99.90th=[ 130], 99.95th=[ 130], 00:19:51.880 | 99.99th=[ 165] 00:19:51.880 bw ( KiB/s): min= 800, max= 2968, per=3.69%, avg=1753.20, stdev=519.62, samples=20 00:19:51.880 iops : min= 200, max= 742, avg=438.30, stdev=129.91, samples=20 00:19:51.880 lat (msec) : 10=0.14%, 20=15.14%, 50=69.22%, 100=15.14%, 250=0.36% 00:19:51.880 cpu : usr=46.19%, sys=2.37%, ctx=1361, majf=0, minf=10 00:19:51.880 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:51.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 issued rwts: total=4393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.880 filename2: (groupid=0, jobs=1): err= 0: pid=86524: Thu Dec 5 06:44:44 2024 00:19:51.880 read: IOPS=435, BW=1742KiB/s (1784kB/s)(17.0MiB/10007msec) 00:19:51.880 slat (usec): min=4, max=8028, avg=22.22, stdev=175.10 00:19:51.880 clat (msec): min=10, max=131, avg=36.64, stdev=15.07 00:19:51.880 lat (msec): min=10, max=131, avg=36.66, stdev=15.08 00:19:51.880 clat percentiles (msec): 00:19:51.880 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:19:51.880 | 30.00th=[ 26], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 41], 00:19:51.880 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 55], 95.00th=[ 
58], 00:19:51.880 | 99.00th=[ 85], 99.50th=[ 93], 99.90th=[ 96], 99.95th=[ 100], 00:19:51.880 | 99.99th=[ 132] 00:19:51.880 bw ( KiB/s): min= 880, max= 2944, per=3.66%, avg=1738.00, stdev=518.39, samples=20 00:19:51.880 iops : min= 220, max= 736, avg=434.50, stdev=129.60, samples=20 00:19:51.880 lat (msec) : 20=14.09%, 50=69.92%, 100=15.95%, 250=0.05% 00:19:51.880 cpu : usr=51.20%, sys=2.62%, ctx=1550, majf=0, minf=9 00:19:51.880 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:51.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.880 issued rwts: total=4358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:51.880 00:19:51.880 Run status group 0 (all jobs): 00:19:51.880 READ: bw=46.3MiB/s (48.6MB/s), 1666KiB/s-2658KiB/s (1706kB/s-2722kB/s), io=466MiB (489MB), run=10001-10060msec 00:19:51.880 06:44:44 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:51.880 06:44:44 -- target/dif.sh@43 -- # local sub 00:19:51.880 06:44:44 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.880 06:44:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.880 06:44:44 -- target/dif.sh@36 -- # local sub_id=0 00:19:51.880 06:44:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.880 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.880 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.880 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.880 06:44:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.880 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.880 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.880 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.880 06:44:44 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.880 06:44:44 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:51.880 06:44:44 -- target/dif.sh@36 -- # local sub_id=1 00:19:51.880 06:44:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.880 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.880 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.881 06:44:44 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:51.881 06:44:44 -- target/dif.sh@36 -- # local sub_id=2 00:19:51.881 06:44:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # numjobs=2 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # iodepth=8 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # runtime=5 00:19:51.881 06:44:44 -- target/dif.sh@115 -- # files=1 00:19:51.881 06:44:44 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:51.881 06:44:44 -- target/dif.sh@28 -- # local sub 00:19:51.881 06:44:44 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.881 06:44:44 -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.881 06:44:44 -- target/dif.sh@18 -- # local sub_id=0 00:19:51.881 06:44:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 bdev_null0 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 [2024-12-05 06:44:44.768515] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.881 06:44:44 -- target/dif.sh@31 -- # create_subsystem 1 00:19:51.881 06:44:44 -- target/dif.sh@18 -- # local sub_id=1 00:19:51.881 06:44:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 bdev_null1 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:19:51.881 06:44:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.881 06:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.881 06:44:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.881 06:44:44 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:51.881 06:44:44 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:51.881 06:44:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:51.881 06:44:44 -- nvmf/common.sh@520 -- # config=() 00:19:51.881 06:44:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:51.881 06:44:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.881 06:44:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.881 { 00:19:51.881 "params": { 00:19:51.881 "name": "Nvme$subsystem", 00:19:51.881 "trtype": "$TEST_TRANSPORT", 00:19:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.881 "adrfam": "ipv4", 00:19:51.881 "trsvcid": "$NVMF_PORT", 00:19:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.881 "hdgst": ${hdgst:-false}, 00:19:51.881 "ddgst": ${ddgst:-false} 00:19:51.881 }, 00:19:51.881 "method": "bdev_nvme_attach_controller" 00:19:51.881 } 00:19:51.881 EOF 00:19:51.881 )") 00:19:51.881 06:44:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.881 06:44:44 -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.881 06:44:44 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.881 06:44:44 -- target/dif.sh@54 -- # local file 00:19:51.881 06:44:44 -- target/dif.sh@56 -- # cat 00:19:51.881 06:44:44 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:51.881 06:44:44 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.881 06:44:44 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:51.881 06:44:44 -- nvmf/common.sh@542 -- # cat 00:19:51.881 06:44:44 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.881 06:44:44 -- common/autotest_common.sh@1330 -- # shift 00:19:51.881 06:44:44 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:51.881 06:44:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.881 06:44:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.881 06:44:44 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.881 06:44:44 -- target/dif.sh@73 -- # cat 00:19:51.881 06:44:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.881 06:44:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.881 { 00:19:51.881 "params": { 00:19:51.881 "name": "Nvme$subsystem", 00:19:51.881 "trtype": "$TEST_TRANSPORT", 00:19:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.881 "adrfam": "ipv4", 00:19:51.881 "trsvcid": "$NVMF_PORT", 00:19:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.881 "hdgst": ${hdgst:-false}, 00:19:51.881 "ddgst": ${ddgst:-false} 00:19:51.881 }, 00:19:51.881 "method": "bdev_nvme_attach_controller" 00:19:51.881 } 00:19:51.881 EOF 00:19:51.881 )") 00:19:51.881 06:44:44 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:51.881 06:44:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.881 06:44:44 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.881 06:44:44 -- nvmf/common.sh@542 -- # cat 00:19:51.881 06:44:44 -- target/dif.sh@72 -- # (( file++ )) 00:19:51.881 06:44:44 -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.881 06:44:44 -- nvmf/common.sh@544 -- # jq . 00:19:51.881 06:44:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:51.882 06:44:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:51.882 "params": { 00:19:51.882 "name": "Nvme0", 00:19:51.882 "trtype": "tcp", 00:19:51.882 "traddr": "10.0.0.2", 00:19:51.882 "adrfam": "ipv4", 00:19:51.882 "trsvcid": "4420", 00:19:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.882 "hdgst": false, 00:19:51.882 "ddgst": false 00:19:51.882 }, 00:19:51.882 "method": "bdev_nvme_attach_controller" 00:19:51.882 },{ 00:19:51.882 "params": { 00:19:51.882 "name": "Nvme1", 00:19:51.882 "trtype": "tcp", 00:19:51.882 "traddr": "10.0.0.2", 00:19:51.882 "adrfam": "ipv4", 00:19:51.882 "trsvcid": "4420", 00:19:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.882 "hdgst": false, 00:19:51.882 "ddgst": false 00:19:51.882 }, 00:19:51.882 "method": "bdev_nvme_attach_controller" 00:19:51.882 }' 00:19:51.882 06:44:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.882 06:44:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.882 06:44:44 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.882 06:44:44 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.882 06:44:44 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:51.882 06:44:44 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:51.882 06:44:44 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:51.882 06:44:44 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:51.882 06:44:44 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.882 06:44:44 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.882 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:51.882 ... 00:19:51.882 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:51.882 ... 00:19:51.882 fio-3.35 00:19:51.882 Starting 4 threads 00:19:51.882 [2024-12-05 06:44:45.384312] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
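The rpc.c messages surrounding the thread start are benign here: the fio plugin's embedded SPDK app tries to claim the default /var/tmp/spdk.sock, finds the nvmf target already holding it, and carries on. For reference, the fio_bdev wrapper traced above amounts to preloading the SPDK fio engine and handing fio the generated bdev JSON. A minimal hand-run equivalent is sketched below; the /tmp path and the single-job shape are illustrative assumptions, not the harness's exact invocation.

  # Describe the remote NVMe-oF namespace as a local SPDK bdev (exposed as "Nvme0n1").
  cat > /tmp/bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }
  EOF
  # Preload the plugin and run one randread job against that bdev.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
    --thread=1 --name=filename0 --filename=Nvme0n1 --rw=randread --bs=8k --iodepth=8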
00:19:51.882 [2024-12-05 06:44:45.384398] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:55.171 00:19:55.171 filename0: (groupid=0, jobs=1): err= 0: pid=86754: Thu Dec 5 06:44:50 2024 00:19:55.171 read: IOPS=2369, BW=18.5MiB/s (19.4MB/s)(92.6MiB/5003msec) 00:19:55.171 slat (nsec): min=7119, max=59129, avg=13410.45, stdev=5301.10 00:19:55.171 clat (usec): min=1248, max=5246, avg=3343.41, stdev=1029.48 00:19:55.171 lat (usec): min=1257, max=5260, avg=3356.82, stdev=1028.33 00:19:55.171 clat percentiles (usec): 00:19:55.171 | 1.00th=[ 1975], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2278], 00:19:55.171 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2933], 60.00th=[ 4146], 00:19:55.171 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:19:55.171 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5080], 99.95th=[ 5145], 00:19:55.171 | 99.99th=[ 5211] 00:19:55.171 bw ( KiB/s): min=18128, max=19536, per=28.18%, avg=18959.00, stdev=595.58, samples=10 00:19:55.171 iops : min= 2266, max= 2442, avg=2369.80, stdev=74.41, samples=10 00:19:55.171 lat (msec) : 2=1.26%, 4=55.26%, 10=43.48% 00:19:55.171 cpu : usr=91.24%, sys=7.74%, ctx=9, majf=0, minf=0 00:19:55.171 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 issued rwts: total=11855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.171 filename0: (groupid=0, jobs=1): err= 0: pid=86755: Thu Dec 5 06:44:50 2024 00:19:55.171 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:19:55.171 slat (usec): min=7, max=589, avg=14.91, stdev= 7.75 00:19:55.171 clat (usec): min=1080, max=6872, avg=4168.46, stdev=811.40 00:19:55.171 lat (usec): min=1094, max=6887, avg=4183.37, stdev=810.55 00:19:55.171 clat percentiles (usec): 00:19:55.171 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 4228], 00:19:55.171 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4490], 00:19:55.171 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4883], 00:19:55.171 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 6063], 99.95th=[ 6521], 00:19:55.171 | 99.99th=[ 6849] 00:19:55.171 bw ( KiB/s): min=13440, max=18080, per=22.05%, avg=14830.22, stdev=1662.55, samples=9 00:19:55.171 iops : min= 1680, max= 2260, avg=1853.78, stdev=207.82, samples=9 00:19:55.171 lat (msec) : 2=0.26%, 4=18.36%, 10=81.38% 00:19:55.171 cpu : usr=92.18%, sys=6.76%, ctx=48, majf=0, minf=9 00:19:55.171 IO depths : 1=0.1%, 2=16.6%, 4=54.6%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 issued rwts: total=9483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.171 filename1: (groupid=0, jobs=1): err= 0: pid=86756: Thu Dec 5 06:44:50 2024 00:19:55.171 read: IOPS=2195, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5003msec) 00:19:55.171 slat (nsec): min=6939, max=60083, avg=12044.23, stdev=4903.53 00:19:55.171 clat (usec): min=1279, max=6513, avg=3610.57, stdev=1087.02 00:19:55.171 lat (usec): min=1287, max=6531, avg=3622.62, stdev=1087.08 00:19:55.171 clat percentiles (usec): 00:19:55.171 | 1.00th=[ 1975], 5.00th=[ 
2114], 10.00th=[ 2147], 20.00th=[ 2278], 00:19:55.171 | 30.00th=[ 2507], 40.00th=[ 2966], 50.00th=[ 4178], 60.00th=[ 4359], 00:19:55.171 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:55.171 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5342], 00:19:55.171 | 99.99th=[ 5473] 00:19:55.171 bw ( KiB/s): min=13184, max=19536, per=26.11%, avg=17563.20, stdev=2812.69, samples=10 00:19:55.171 iops : min= 1648, max= 2442, avg=2195.40, stdev=351.59, samples=10 00:19:55.171 lat (msec) : 2=1.32%, 4=44.03%, 10=54.65% 00:19:55.171 cpu : usr=91.30%, sys=7.78%, ctx=46, majf=0, minf=0 00:19:55.171 IO depths : 1=0.1%, 2=5.3%, 4=60.8%, 8=33.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 issued rwts: total=10985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.171 filename1: (groupid=0, jobs=1): err= 0: pid=86757: Thu Dec 5 06:44:50 2024 00:19:55.171 read: IOPS=1948, BW=15.2MiB/s (16.0MB/s)(76.1MiB/5001msec) 00:19:55.171 slat (nsec): min=7441, max=56613, avg=15009.98, stdev=4374.61 00:19:55.171 clat (usec): min=1107, max=7006, avg=4056.54, stdev=860.96 00:19:55.171 lat (usec): min=1120, max=7021, avg=4071.55, stdev=860.58 00:19:55.171 clat percentiles (usec): 00:19:55.171 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2769], 00:19:55.171 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:19:55.171 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:19:55.171 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5604], 00:19:55.171 | 99.99th=[ 6980] 00:19:55.171 bw ( KiB/s): min=13824, max=18256, per=22.74%, avg=15297.78, stdev=1843.14, samples=9 00:19:55.171 iops : min= 1728, max= 2282, avg=1912.22, stdev=230.39, samples=9 00:19:55.171 lat (msec) : 2=0.70%, 4=22.84%, 10=76.46% 00:19:55.171 cpu : usr=92.40%, sys=6.82%, ctx=7, majf=0, minf=9 00:19:55.171 IO depths : 1=0.1%, 2=14.4%, 4=55.8%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.171 issued rwts: total=9746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:55.171 00:19:55.171 Run status group 0 (all jobs): 00:19:55.171 READ: bw=65.7MiB/s (68.9MB/s), 14.8MiB/s-18.5MiB/s (15.5MB/s-19.4MB/s), io=329MiB (345MB), run=5001-5003msec 00:19:55.430 06:44:50 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:55.430 06:44:50 -- target/dif.sh@43 -- # local sub 00:19:55.430 06:44:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:55.431 06:44:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:55.431 06:44:50 -- target/dif.sh@36 -- # local sub_id=0 00:19:55.431 06:44:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 
06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@45 -- # for sub in "$@" 00:19:55.431 06:44:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:55.431 06:44:50 -- target/dif.sh@36 -- # local sub_id=1 00:19:55.431 06:44:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 ************************************ 00:19:55.431 END TEST fio_dif_rand_params 00:19:55.431 ************************************ 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 00:19:55.431 real 0m32.154s 00:19:55.431 user 3m27.201s 00:19:55.431 sys 0m9.276s 00:19:55.431 06:44:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 06:44:50 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:55.431 06:44:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:55.431 06:44:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 ************************************ 00:19:55.431 START TEST fio_dif_digest 00:19:55.431 ************************************ 00:19:55.431 06:44:50 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:55.431 06:44:50 -- target/dif.sh@123 -- # local NULL_DIF 00:19:55.431 06:44:50 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:55.431 06:44:50 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:55.431 06:44:50 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:55.431 06:44:50 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:55.431 06:44:50 -- target/dif.sh@127 -- # numjobs=3 00:19:55.431 06:44:50 -- target/dif.sh@127 -- # iodepth=3 00:19:55.431 06:44:50 -- target/dif.sh@127 -- # runtime=10 00:19:55.431 06:44:50 -- target/dif.sh@128 -- # hdgst=true 00:19:55.431 06:44:50 -- target/dif.sh@128 -- # ddgst=true 00:19:55.431 06:44:50 -- target/dif.sh@130 -- # create_subsystems 0 00:19:55.431 06:44:50 -- target/dif.sh@28 -- # local sub 00:19:55.431 06:44:50 -- target/dif.sh@30 -- # for sub in "$@" 00:19:55.431 06:44:50 -- target/dif.sh@31 -- # create_subsystem 0 00:19:55.431 06:44:50 -- target/dif.sh@18 -- # local sub_id=0 00:19:55.431 06:44:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 bdev_null0 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
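The bdev_null_create above sizes the bdev at 64 MiB with 512-byte data blocks plus 16 bytes of metadata and DIF type 3, which gives the namespace per-block protection information to carry through the digest test. A quick way to confirm the geometry over the same RPC socket is sketched below, assuming a running target on the default socket; the jq field names are an assumption about bdev_get_bdevs output, not taken from this run.

  # Dump the geometry of the null bdev the test just registered as a namespace.
  scripts/rpc.py bdev_get_bdevs -b bdev_null0 | \
    jq '.[0] | {name, block_size, num_blocks, md_size, dif_type}'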
00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:55.431 06:44:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.431 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 [2024-12-05 06:44:50.791368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.431 06:44:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.431 06:44:50 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:55.431 06:44:50 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:55.431 06:44:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:55.431 06:44:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.431 06:44:50 -- nvmf/common.sh@520 -- # config=() 00:19:55.431 06:44:50 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.431 06:44:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.431 06:44:50 -- target/dif.sh@82 -- # gen_fio_conf 00:19:55.431 06:44:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.431 06:44:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:55.431 06:44:50 -- target/dif.sh@54 -- # local file 00:19:55.431 06:44:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.431 { 00:19:55.431 "params": { 00:19:55.431 "name": "Nvme$subsystem", 00:19:55.431 "trtype": "$TEST_TRANSPORT", 00:19:55.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.431 "adrfam": "ipv4", 00:19:55.431 "trsvcid": "$NVMF_PORT", 00:19:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.431 "hdgst": ${hdgst:-false}, 00:19:55.431 "ddgst": ${ddgst:-false} 00:19:55.431 }, 00:19:55.431 "method": "bdev_nvme_attach_controller" 00:19:55.431 } 00:19:55.431 EOF 00:19:55.431 )") 00:19:55.431 06:44:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.431 06:44:50 -- target/dif.sh@56 -- # cat 00:19:55.431 06:44:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:55.431 06:44:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.431 06:44:50 -- common/autotest_common.sh@1330 -- # shift 00:19:55.431 06:44:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:55.431 06:44:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.431 06:44:50 -- nvmf/common.sh@542 -- # cat 00:19:55.431 06:44:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:55.431 06:44:50 -- target/dif.sh@72 -- # (( file <= files )) 00:19:55.431 06:44:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.431 06:44:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:55.431 06:44:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:55.431 06:44:50 -- nvmf/common.sh@544 -- # jq . 
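The JSON template assembled above defaults "hdgst" and "ddgst" to false; this test overrides both to true (visible in the rendered config just below), enabling NVMe/TCP header and data digests, the per-PDU CRC32C checks negotiated at connect time. With the Linux kernel initiator the same knobs are connect-time flags; a sketch, assuming a reasonably recent nvme-cli (the flag names are an assumption, not taken from this run):

  # Connect with both header and data digests enabled.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest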
00:19:55.431 06:44:50 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.431 06:44:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.431 "params": { 00:19:55.431 "name": "Nvme0", 00:19:55.431 "trtype": "tcp", 00:19:55.431 "traddr": "10.0.0.2", 00:19:55.431 "adrfam": "ipv4", 00:19:55.431 "trsvcid": "4420", 00:19:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:55.431 "hdgst": true, 00:19:55.431 "ddgst": true 00:19:55.431 }, 00:19:55.431 "method": "bdev_nvme_attach_controller" 00:19:55.431 }' 00:19:55.431 06:44:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:55.431 06:44:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:55.431 06:44:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.432 06:44:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.432 06:44:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:55.432 06:44:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:55.432 06:44:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:55.432 06:44:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:55.432 06:44:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.432 06:44:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:55.690 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:55.690 ... 00:19:55.690 fio-3.35 00:19:55.690 Starting 3 threads 00:19:55.950 [2024-12-05 06:44:51.293988] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
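The three threads just launched are a single 128 KiB randread job fanned out with numjobs=3 at queue depth 3, matching the bs, numjobs, and iodepth knobs traced earlier. Written as an ordinary job file instead of the /dev/fd/61 pipe the harness uses, the shape is roughly the sketch below; time_based is inferred from the roughly 10 s runtimes, not copied from the generated job.

  cat > /tmp/dif_digest.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1
  filename=Nvme0n1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=10
  EOF
  # Then, with the plugin preloaded as above: fio /tmp/dif_digest.fio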
00:19:55.950 [2024-12-05 06:44:51.294050] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:08.185 00:20:08.185 filename0: (groupid=0, jobs=1): err= 0: pid=86863: Thu Dec 5 06:45:01 2024 00:20:08.185 read: IOPS=233, BW=29.1MiB/s (30.5MB/s)(291MiB/10001msec) 00:20:08.185 slat (nsec): min=6837, max=51769, avg=15385.43, stdev=6226.14 00:20:08.185 clat (usec): min=11796, max=16860, avg=12838.20, stdev=605.76 00:20:08.185 lat (usec): min=11803, max=16885, avg=12853.59, stdev=606.09 00:20:08.185 clat percentiles (usec): 00:20:08.185 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:20:08.185 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:20:08.185 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:20:08.185 | 99.00th=[14222], 99.50th=[14353], 99.90th=[16909], 99.95th=[16909], 00:20:08.185 | 99.99th=[16909] 00:20:08.185 bw ( KiB/s): min=28416, max=30720, per=33.37%, avg=29868.00, stdev=621.26, samples=19 00:20:08.185 iops : min= 222, max= 240, avg=233.32, stdev= 4.85, samples=19 00:20:08.185 lat (msec) : 20=100.00% 00:20:08.185 cpu : usr=91.63%, sys=7.87%, ctx=15, majf=0, minf=0 00:20:08.185 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.185 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.185 filename0: (groupid=0, jobs=1): err= 0: pid=86864: Thu Dec 5 06:45:01 2024 00:20:08.185 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(292MiB/10009msec) 00:20:08.185 slat (nsec): min=7123, max=56027, avg=16321.81, stdev=5590.78 00:20:08.185 clat (usec): min=9461, max=14723, avg=12828.00, stdev=603.85 00:20:08.185 lat (usec): min=9476, max=14748, avg=12844.32, stdev=604.27 00:20:08.185 clat percentiles (usec): 00:20:08.185 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:20:08.185 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:20:08.185 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:20:08.185 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14746], 99.95th=[14746], 00:20:08.185 | 99.99th=[14746] 00:20:08.185 bw ( KiB/s): min=28416, max=30720, per=33.38%, avg=29874.32, stdev=622.13, samples=19 00:20:08.185 iops : min= 222, max= 240, avg=233.37, stdev= 4.86, samples=19 00:20:08.185 lat (msec) : 10=0.13%, 20=99.87% 00:20:08.185 cpu : usr=91.36%, sys=8.03%, ctx=19, majf=0, minf=0 00:20:08.185 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.185 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.185 filename0: (groupid=0, jobs=1): err= 0: pid=86865: Thu Dec 5 06:45:01 2024 00:20:08.185 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10008msec) 00:20:08.185 slat (usec): min=6, max=162, avg=16.63, stdev= 6.63 00:20:08.185 clat (usec): min=9560, max=14403, avg=12826.45, stdev=600.40 00:20:08.185 lat (usec): min=9574, max=14419, avg=12843.08, stdev=601.01 00:20:08.185 clat percentiles (usec): 00:20:08.185 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 
20.00th=[12256], 00:20:08.185 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:20:08.185 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:20:08.185 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14353], 99.95th=[14353], 00:20:08.185 | 99.99th=[14353] 00:20:08.185 bw ( KiB/s): min=28416, max=30720, per=33.38%, avg=29874.32, stdev=622.13, samples=19 00:20:08.185 iops : min= 222, max= 240, avg=233.37, stdev= 4.86, samples=19 00:20:08.185 lat (msec) : 10=0.13%, 20=99.87% 00:20:08.185 cpu : usr=91.76%, sys=7.42%, ctx=96, majf=0, minf=11 00:20:08.185 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:08.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:08.185 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:08.185 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:08.185 00:20:08.185 Run status group 0 (all jobs): 00:20:08.185 READ: bw=87.4MiB/s (91.7MB/s), 29.1MiB/s-29.2MiB/s (30.5MB/s-30.6MB/s), io=875MiB (917MB), run=10001-10009msec 00:20:08.185 06:45:01 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:08.185 06:45:01 -- target/dif.sh@43 -- # local sub 00:20:08.185 06:45:01 -- target/dif.sh@45 -- # for sub in "$@" 00:20:08.185 06:45:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:08.185 06:45:01 -- target/dif.sh@36 -- # local sub_id=0 00:20:08.185 06:45:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:08.185 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.185 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:20:08.185 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.185 06:45:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:08.185 06:45:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.185 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:20:08.185 ************************************ 00:20:08.185 END TEST fio_dif_digest 00:20:08.185 ************************************ 00:20:08.185 06:45:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.185 00:20:08.185 real 0m10.839s 00:20:08.185 user 0m28.021s 00:20:08.185 sys 0m2.567s 00:20:08.185 06:45:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.185 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:20:08.185 06:45:01 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:08.185 06:45:01 -- target/dif.sh@147 -- # nvmftestfini 00:20:08.185 06:45:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.185 06:45:01 -- nvmf/common.sh@116 -- # sync 00:20:08.185 06:45:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.185 06:45:01 -- nvmf/common.sh@119 -- # set +e 00:20:08.185 06:45:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.185 06:45:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.185 rmmod nvme_tcp 00:20:08.185 rmmod nvme_fabrics 00:20:08.185 rmmod nvme_keyring 00:20:08.185 06:45:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.185 06:45:01 -- nvmf/common.sh@123 -- # set -e 00:20:08.185 06:45:01 -- nvmf/common.sh@124 -- # return 0 00:20:08.185 06:45:01 -- nvmf/common.sh@477 -- # '[' -n 86032 ']' 00:20:08.185 06:45:01 -- nvmf/common.sh@478 -- # killprocess 86032 00:20:08.185 06:45:01 -- common/autotest_common.sh@936 -- # '[' -z 86032 ']' 00:20:08.185 06:45:01 -- common/autotest_common.sh@940 -- # kill -0 
86032 00:20:08.185 06:45:01 -- common/autotest_common.sh@941 -- # uname 00:20:08.185 06:45:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.185 06:45:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86032 00:20:08.185 killing process with pid 86032 00:20:08.185 06:45:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.185 06:45:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.186 06:45:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86032' 00:20:08.186 06:45:01 -- common/autotest_common.sh@955 -- # kill 86032 00:20:08.186 06:45:01 -- common/autotest_common.sh@960 -- # wait 86032 00:20:08.186 06:45:01 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:08.186 06:45:01 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:08.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.186 Waiting for block devices as requested 00:20:08.186 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:08.186 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:08.186 06:45:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.186 06:45:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.186 06:45:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.186 06:45:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:08.186 06:45:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.186 06:45:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:08.186 ************************************ 00:20:08.186 END TEST nvmf_dif 00:20:08.186 ************************************ 00:20:08.186 00:20:08.186 real 1m7.265s 00:20:08.186 user 5m14.286s 00:20:08.186 sys 0m20.545s 00:20:08.186 06:45:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.186 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:20:08.186 06:45:02 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:08.186 06:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:08.186 06:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.186 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:20:08.186 ************************************ 00:20:08.186 START TEST nvmf_abort_qd_sizes 00:20:08.186 ************************************ 00:20:08.186 06:45:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:08.186 * Looking for test storage... 
00:20:08.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:08.186 06:45:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.186 06:45:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.186 06:45:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.186 06:45:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.186 06:45:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.186 06:45:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.186 06:45:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.186 06:45:02 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.186 06:45:02 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.186 06:45:02 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.186 06:45:02 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.186 06:45:02 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.186 06:45:02 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.186 06:45:02 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.186 06:45:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.186 06:45:02 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.186 06:45:02 -- scripts/common.sh@344 -- # : 1 00:20:08.186 06:45:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.186 06:45:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.186 06:45:02 -- scripts/common.sh@364 -- # decimal 1 00:20:08.186 06:45:02 -- scripts/common.sh@352 -- # local d=1 00:20:08.186 06:45:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.186 06:45:02 -- scripts/common.sh@354 -- # echo 1 00:20:08.186 06:45:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.186 06:45:02 -- scripts/common.sh@365 -- # decimal 2 00:20:08.186 06:45:02 -- scripts/common.sh@352 -- # local d=2 00:20:08.186 06:45:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.186 06:45:02 -- scripts/common.sh@354 -- # echo 2 00:20:08.186 06:45:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.186 06:45:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.186 06:45:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.186 06:45:02 -- scripts/common.sh@367 -- # return 0 00:20:08.186 06:45:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.186 06:45:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.186 --rc genhtml_branch_coverage=1 00:20:08.186 --rc genhtml_function_coverage=1 00:20:08.186 --rc genhtml_legend=1 00:20:08.186 --rc geninfo_all_blocks=1 00:20:08.186 --rc geninfo_unexecuted_blocks=1 00:20:08.186 00:20:08.186 ' 00:20:08.186 06:45:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.186 --rc genhtml_branch_coverage=1 00:20:08.186 --rc genhtml_function_coverage=1 00:20:08.186 --rc genhtml_legend=1 00:20:08.186 --rc geninfo_all_blocks=1 00:20:08.186 --rc geninfo_unexecuted_blocks=1 00:20:08.186 00:20:08.186 ' 00:20:08.186 06:45:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.186 --rc genhtml_branch_coverage=1 00:20:08.186 --rc genhtml_function_coverage=1 00:20:08.186 --rc genhtml_legend=1 00:20:08.186 --rc geninfo_all_blocks=1 00:20:08.186 --rc geninfo_unexecuted_blocks=1 00:20:08.186 00:20:08.186 ' 00:20:08.186 
06:45:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.186 --rc genhtml_branch_coverage=1 00:20:08.186 --rc genhtml_function_coverage=1 00:20:08.186 --rc genhtml_legend=1 00:20:08.186 --rc geninfo_all_blocks=1 00:20:08.186 --rc geninfo_unexecuted_blocks=1 00:20:08.186 00:20:08.186 ' 00:20:08.186 06:45:02 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.186 06:45:02 -- nvmf/common.sh@7 -- # uname -s 00:20:08.186 06:45:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.186 06:45:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.186 06:45:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.186 06:45:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.186 06:45:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.186 06:45:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.186 06:45:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.186 06:45:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.186 06:45:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.186 06:45:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e 00:20:08.186 06:45:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=910f3027-a360-4bbd-806e-4d2cb117dd4e 00:20:08.186 06:45:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.186 06:45:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.186 06:45:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.186 06:45:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.186 06:45:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.186 06:45:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.186 06:45:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.186 06:45:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.186 06:45:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.186 06:45:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.186 06:45:02 -- paths/export.sh@5 -- # export PATH 00:20:08.186 06:45:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.186 06:45:02 -- nvmf/common.sh@46 -- # : 0 00:20:08.186 06:45:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.186 06:45:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.186 06:45:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.186 06:45:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.186 06:45:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.186 06:45:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:08.186 06:45:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.186 06:45:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.186 06:45:02 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:08.186 06:45:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:08.186 06:45:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.186 06:45:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:08.186 06:45:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.186 06:45:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.186 06:45:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.186 06:45:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:08.186 06:45:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.186 06:45:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:08.186 06:45:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:08.187 06:45:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:08.187 06:45:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.187 06:45:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.187 06:45:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.187 06:45:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:08.187 06:45:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.187 06:45:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.187 06:45:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.187 06:45:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.187 06:45:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.187 06:45:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.187 06:45:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.187 06:45:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.187 06:45:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:08.187 06:45:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:08.187 Cannot find device "nvmf_tgt_br" 00:20:08.187 06:45:02 -- nvmf/common.sh@154 -- # true 00:20:08.187 06:45:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.187 Cannot find device "nvmf_tgt_br2" 00:20:08.187 06:45:02 -- nvmf/common.sh@155 -- # true 
00:20:08.187 06:45:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:08.187 06:45:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:08.187 Cannot find device "nvmf_tgt_br" 00:20:08.187 06:45:02 -- nvmf/common.sh@157 -- # true 00:20:08.187 06:45:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:08.187 Cannot find device "nvmf_tgt_br2" 00:20:08.187 06:45:02 -- nvmf/common.sh@158 -- # true 00:20:08.187 06:45:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:08.187 06:45:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:08.187 06:45:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.187 06:45:02 -- nvmf/common.sh@161 -- # true 00:20:08.187 06:45:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.187 06:45:02 -- nvmf/common.sh@162 -- # true 00:20:08.187 06:45:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.187 06:45:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.187 06:45:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.187 06:45:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.187 06:45:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.187 06:45:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.187 06:45:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.187 06:45:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.187 06:45:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.187 06:45:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:08.187 06:45:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:08.187 06:45:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:08.187 06:45:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:08.187 06:45:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.187 06:45:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.187 06:45:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.187 06:45:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:08.187 06:45:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:08.187 06:45:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.187 06:45:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.187 06:45:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.187 06:45:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.187 06:45:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.187 06:45:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:08.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:08.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:08.187 00:20:08.187 --- 10.0.0.2 ping statistics --- 00:20:08.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.187 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:08.187 06:45:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:08.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:08.187 00:20:08.187 --- 10.0.0.3 ping statistics --- 00:20:08.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.187 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:08.187 06:45:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:08.187 00:20:08.187 --- 10.0.0.1 ping statistics --- 00:20:08.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.187 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:08.187 06:45:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.187 06:45:03 -- nvmf/common.sh@421 -- # return 0 00:20:08.187 06:45:03 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:08.187 06:45:03 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:08.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.705 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:08.705 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:08.705 06:45:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.705 06:45:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:08.705 06:45:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:08.705 06:45:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.705 06:45:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:08.705 06:45:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:08.705 06:45:04 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:08.705 06:45:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.705 06:45:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.705 06:45:04 -- common/autotest_common.sh@10 -- # set +x 00:20:08.705 06:45:04 -- nvmf/common.sh@469 -- # nvmfpid=87464 00:20:08.705 06:45:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:08.705 06:45:04 -- nvmf/common.sh@470 -- # waitforlisten 87464 00:20:08.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.705 06:45:04 -- common/autotest_common.sh@829 -- # '[' -z 87464 ']' 00:20:08.705 06:45:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.705 06:45:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.705 06:45:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.705 06:45:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.705 06:45:04 -- common/autotest_common.sh@10 -- # set +x 00:20:08.705 [2024-12-05 06:45:04.161494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
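For reference, the nvmf_veth_init plumbing traced above reduces to the following standalone sketch. Interface names, addresses, and firewall rules are taken directly from the trace; it assumes root plus iproute2 and iptables:

  # one initiator-side veth pair, two target-side pairs whose far ends live in a netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target, answers as shown in the trace

The bridge ties the initiator-side and target-side veth peers into one L2 segment, which is why the cross-namespace pings above succeed without any routing.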
00:20:08.705 [2024-12-05 06:45:04.161779] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.964 [2024-12-05 06:45:04.301807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.964 [2024-12-05 06:45:04.344034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:08.964 [2024-12-05 06:45:04.344435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.964 [2024-12-05 06:45:04.344635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.964 [2024-12-05 06:45:04.344813] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.964 [2024-12-05 06:45:04.345106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.964 [2024-12-05 06:45:04.345250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.964 [2024-12-05 06:45:04.345333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.964 [2024-12-05 06:45:04.345346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.902 06:45:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.902 06:45:05 -- common/autotest_common.sh@862 -- # return 0 00:20:09.902 06:45:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:09.902 06:45:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.902 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.902 06:45:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:09.902 06:45:05 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:09.902 06:45:05 -- scripts/common.sh@312 -- # local nvmes 00:20:09.902 06:45:05 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:09.902 06:45:05 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:09.902 06:45:05 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:09.902 06:45:05 -- scripts/common.sh@297 -- # local bdf= 00:20:09.902 06:45:05 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:09.902 06:45:05 -- scripts/common.sh@232 -- # local class 00:20:09.902 06:45:05 -- scripts/common.sh@233 -- # local subclass 00:20:09.902 06:45:05 -- scripts/common.sh@234 -- # local progif 00:20:09.902 06:45:05 -- scripts/common.sh@235 -- # printf %02x 1 00:20:09.902 06:45:05 -- scripts/common.sh@235 -- # class=01 00:20:09.902 06:45:05 -- scripts/common.sh@236 -- # printf %02x 8 00:20:09.902 06:45:05 -- scripts/common.sh@236 -- # subclass=08 00:20:09.902 06:45:05 -- scripts/common.sh@237 -- # printf %02x 2 00:20:09.902 06:45:05 -- scripts/common.sh@237 -- # progif=02 00:20:09.902 06:45:05 -- scripts/common.sh@239 -- # hash lspci 00:20:09.902 06:45:05 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:09.902 06:45:05 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:09.902 06:45:05 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:09.902 06:45:05 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:09.902 06:45:05 -- scripts/common.sh@244 -- # tr -d '"' 00:20:09.902 06:45:05 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:09.902 06:45:05 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:09.902 06:45:05 -- scripts/common.sh@15 -- # local i 00:20:09.902 06:45:05 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:09.902 06:45:05 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:09.902 06:45:05 -- scripts/common.sh@24 -- # return 0 00:20:09.902 06:45:05 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:09.902 06:45:05 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:09.902 06:45:05 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:09.902 06:45:05 -- scripts/common.sh@15 -- # local i 00:20:09.902 06:45:05 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:09.902 06:45:05 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:09.902 06:45:05 -- scripts/common.sh@24 -- # return 0 00:20:09.902 06:45:05 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:09.902 06:45:05 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:09.902 06:45:05 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:09.902 06:45:05 -- scripts/common.sh@322 -- # uname -s 00:20:09.902 06:45:05 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:09.902 06:45:05 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:09.902 06:45:05 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:09.902 06:45:05 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:09.902 06:45:05 -- scripts/common.sh@322 -- # uname -s 00:20:09.902 06:45:05 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:09.902 06:45:05 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:09.902 06:45:05 -- scripts/common.sh@327 -- # (( 2 )) 00:20:09.902 06:45:05 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:09.902 06:45:05 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:09.902 06:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:09.902 06:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:09.902 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.902 ************************************ 00:20:09.903 START TEST spdk_target_abort 00:20:09.903 ************************************ 00:20:09.903 06:45:05 -- common/autotest_common.sh@1114 -- # spdk_target 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:09.903 06:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.903 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 spdk_targetn1 00:20:09.903 06:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.903 06:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.903 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 [2024-12-05 
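The nvme_in_userspace walk above keys off PCI class code 01h/08h/02h (mass storage controller, non-volatile memory, NVMe). Its core scan is the pipeline below, reproduced from the trace; the pci_can_use checks then filter the result against allow/block lists, which appear empty in this run:

  # print the BDF of every NVMe-class PCI function (class 01, subclass 08, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Here that yields the two emulated controllers, 0000:00:06.0 and 0000:00:07.0, the first of which becomes the spdk_target backing device.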
06:45:05.349297] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.903 06:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:09.903 06:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.903 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 06:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.903 06:45:05 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:09.903 06:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.903 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:10.161 06:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:10.161 06:45:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.161 06:45:05 -- common/autotest_common.sh@10 -- # set +x 00:20:10.161 [2024-12-05 06:45:05.377515] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.161 06:45:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:10.161 06:45:05 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:13.453 Initializing NVMe Controllers 00:20:13.453 Attached to 
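rpc_cmd in the trace talks to the target over /var/tmp/spdk.sock. The same bring-up can be issued by hand with SPDK's rpc.py; the script path below is assumed from a stock checkout, while the RPC names and arguments are taken verbatim from the log:

  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420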
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:13.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:13.453 Initialization complete. Launching workers. 00:20:13.453 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10235, failed: 0 00:20:13.453 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1031, failed to submit 9204 00:20:13.453 success 749, unsuccess 282, failed 0 00:20:13.453 06:45:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:13.453 06:45:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:16.743 Initializing NVMe Controllers 00:20:16.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:16.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:16.743 Initialization complete. Launching workers. 00:20:16.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8976, failed: 0 00:20:16.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1138, failed to submit 7838 00:20:16.743 success 422, unsuccess 716, failed 0 00:20:16.743 06:45:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:16.743 06:45:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:20.032 Initializing NVMe Controllers 00:20:20.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:20.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:20.032 Initialization complete. Launching workers. 
00:20:20.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31076, failed: 0 00:20:20.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2260, failed to submit 28816 00:20:20.032 success 419, unsuccess 1841, failed 0 00:20:20.032 06:45:15 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:20.032 06:45:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.032 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.032 06:45:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.032 06:45:15 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:20.032 06:45:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.032 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.032 06:45:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.032 06:45:15 -- target/abort_qd_sizes.sh@62 -- # killprocess 87464 00:20:20.032 06:45:15 -- common/autotest_common.sh@936 -- # '[' -z 87464 ']' 00:20:20.032 06:45:15 -- common/autotest_common.sh@940 -- # kill -0 87464 00:20:20.032 06:45:15 -- common/autotest_common.sh@941 -- # uname 00:20:20.032 06:45:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.032 06:45:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87464 00:20:20.290 killing process with pid 87464 00:20:20.290 06:45:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:20.290 06:45:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:20.290 06:45:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87464' 00:20:20.290 06:45:15 -- common/autotest_common.sh@955 -- # kill 87464 00:20:20.290 06:45:15 -- common/autotest_common.sh@960 -- # wait 87464 00:20:20.290 00:20:20.290 real 0m10.366s 00:20:20.290 user 0m42.607s 00:20:20.290 sys 0m2.077s 00:20:20.290 06:45:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.290 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 ************************************ 00:20:20.290 END TEST spdk_target_abort 00:20:20.290 ************************************ 00:20:20.290 06:45:15 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:20.290 06:45:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:20.290 06:45:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.290 06:45:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 ************************************ 00:20:20.290 START TEST kernel_target_abort 00:20:20.290 ************************************ 00:20:20.290 06:45:15 -- common/autotest_common.sh@1114 -- # kernel_target 00:20:20.290 06:45:15 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:20.290 06:45:15 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:20.290 06:45:15 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:20.290 06:45:15 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:20.290 06:45:15 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:20.290 06:45:15 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:20.290 06:45:15 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:20.290 06:45:15 -- nvmf/common.sh@627 -- # local block nvme 00:20:20.290 06:45:15 -- 
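Stripped of bookkeeping, the spdk_target_abort test that just completed is a queue-depth sweep of the abort example against that listener; paths and arguments below are exactly as traced, with -M 50 making the rw workload half reads:

  cd /home/vagrant/spdk_repo/spdk
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  done

Each run reports the I/O completed, the aborts submitted, and the success/unsuccess split for those aborts, as seen in the three result blocks above.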
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:20.290 06:45:15 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:20.291 06:45:15 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:20.291 06:45:15 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:20.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.892 Waiting for block devices as requested 00:20:20.892 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.892 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.892 06:45:16 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:20.892 06:45:16 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.892 06:45:16 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:20.892 06:45:16 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:20.892 06:45:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:20.892 No valid GPT data, bailing 00:20:21.150 06:45:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:21.150 06:45:16 -- scripts/common.sh@393 -- # pt= 00:20:21.150 06:45:16 -- scripts/common.sh@394 -- # return 1 00:20:21.150 06:45:16 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:21.150 06:45:16 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:21.150 06:45:16 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:21.150 06:45:16 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:21.150 06:45:16 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:21.151 06:45:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:21.151 No valid GPT data, bailing 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # pt= 00:20:21.151 06:45:16 -- scripts/common.sh@394 -- # return 1 00:20:21.151 06:45:16 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:21.151 06:45:16 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:21.151 06:45:16 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:21.151 06:45:16 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:21.151 06:45:16 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:21.151 06:45:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:21.151 No valid GPT data, bailing 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # pt= 00:20:21.151 06:45:16 -- scripts/common.sh@394 -- # return 1 00:20:21.151 06:45:16 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:21.151 06:45:16 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:21.151 06:45:16 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:21.151 06:45:16 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:21.151 06:45:16 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:21.151 06:45:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:21.151 No valid GPT data, bailing 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:21.151 06:45:16 -- scripts/common.sh@393 -- # pt= 00:20:21.151 06:45:16 -- scripts/common.sh@394 -- # return 1 00:20:21.151 06:45:16 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:21.151 06:45:16 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:20:21.151 06:45:16 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:21.151 06:45:16 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:21.151 06:45:16 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:21.151 06:45:16 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:21.151 06:45:16 -- nvmf/common.sh@654 -- # echo 1 00:20:21.151 06:45:16 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:21.151 06:45:16 -- nvmf/common.sh@656 -- # echo 1 00:20:21.151 06:45:16 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:21.151 06:45:16 -- nvmf/common.sh@663 -- # echo tcp 00:20:21.151 06:45:16 -- nvmf/common.sh@664 -- # echo 4420 00:20:21.151 06:45:16 -- nvmf/common.sh@665 -- # echo ipv4 00:20:21.151 06:45:16 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:21.151 06:45:16 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:910f3027-a360-4bbd-806e-4d2cb117dd4e --hostid=910f3027-a360-4bbd-806e-4d2cb117dd4e -a 10.0.0.1 -t tcp -s 4420 00:20:21.410 00:20:21.410 Discovery Log Number of Records 2, Generation counter 2 00:20:21.410 =====Discovery Log Entry 0====== 00:20:21.410 trtype: tcp 00:20:21.410 adrfam: ipv4 00:20:21.410 subtype: current discovery subsystem 00:20:21.410 treq: not specified, sq flow control disable supported 00:20:21.410 portid: 1 00:20:21.410 trsvcid: 4420 00:20:21.410 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:21.410 traddr: 10.0.0.1 00:20:21.410 eflags: none 00:20:21.410 sectype: none 00:20:21.410 =====Discovery Log Entry 1====== 00:20:21.410 trtype: tcp 00:20:21.410 adrfam: ipv4 00:20:21.410 subtype: nvme subsystem 00:20:21.410 treq: not specified, sq flow control disable supported 00:20:21.410 portid: 1 00:20:21.410 trsvcid: 4420 00:20:21.410 subnqn: kernel_target 00:20:21.410 traddr: 10.0.0.1 00:20:21.410 eflags: none 00:20:21.410 sectype: none 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
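configure_kernel_target, traced above, first claims a block device nothing else is using (spdk-gpt.py and blkid both reporting no partition table marks a device free; the last match, /dev/nvme1n3, wins) and then builds the nvmet configfs tree. A condensed sketch follows; the attribute file names are the standard nvmet configfs layout and are inferred here, since the trace only shows the values being echoed:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  ns=$sub/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$ns" "$port"
  echo SPDK-kernel_target > "$sub/attr_model"            # inferred attribute file
  echo 1                  > "$sub/attr_allow_any_host"   # inferred attribute file
  echo /dev/nvme1n3       > "$ns/device_path"
  echo 1                  > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # returns the two discovery records shown above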
00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:21.410 06:45:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:24.701 Initializing NVMe Controllers 00:20:24.701 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:24.701 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:24.701 Initialization complete. Launching workers. 00:20:24.701 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31300, failed: 0 00:20:24.701 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31300, failed to submit 0 00:20:24.701 success 0, unsuccess 31300, failed 0 00:20:24.701 06:45:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:24.701 06:45:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:27.992 Initializing NVMe Controllers 00:20:27.992 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:27.992 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:27.992 Initialization complete. Launching workers. 00:20:27.992 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 66086, failed: 0 00:20:27.992 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28178, failed to submit 37908 00:20:27.992 success 0, unsuccess 28178, failed 0 00:20:27.992 06:45:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:27.992 06:45:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:31.280 Initializing NVMe Controllers 00:20:31.280 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:31.280 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:31.280 Initialization complete. Launching workers. 
00:20:31.280 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 73816, failed: 0 00:20:31.280 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18456, failed to submit 55360 00:20:31.280 success 0, unsuccess 18456, failed 0 00:20:31.280 06:45:26 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:31.280 06:45:26 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:31.280 06:45:26 -- nvmf/common.sh@677 -- # echo 0 00:20:31.280 06:45:26 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:31.280 06:45:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:31.280 06:45:26 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:31.280 06:45:26 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:31.280 06:45:26 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:31.280 06:45:26 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:31.280 ************************************ 00:20:31.280 END TEST kernel_target_abort 00:20:31.280 ************************************ 00:20:31.280 00:20:31.280 real 0m10.512s 00:20:31.280 user 0m5.606s 00:20:31.280 sys 0m2.384s 00:20:31.280 06:45:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:31.280 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:31.280 06:45:26 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:31.280 06:45:26 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:31.280 06:45:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:31.280 06:45:26 -- nvmf/common.sh@116 -- # sync 00:20:31.280 06:45:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:31.280 06:45:26 -- nvmf/common.sh@119 -- # set +e 00:20:31.280 06:45:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:31.280 06:45:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:31.280 rmmod nvme_tcp 00:20:31.280 rmmod nvme_fabrics 00:20:31.280 rmmod nvme_keyring 00:20:31.280 06:45:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:31.280 06:45:26 -- nvmf/common.sh@123 -- # set -e 00:20:31.280 06:45:26 -- nvmf/common.sh@124 -- # return 0 00:20:31.280 06:45:26 -- nvmf/common.sh@477 -- # '[' -n 87464 ']' 00:20:31.280 06:45:26 -- nvmf/common.sh@478 -- # killprocess 87464 00:20:31.280 06:45:26 -- common/autotest_common.sh@936 -- # '[' -z 87464 ']' 00:20:31.280 06:45:26 -- common/autotest_common.sh@940 -- # kill -0 87464 00:20:31.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87464) - No such process 00:20:31.280 Process with pid 87464 is not found 00:20:31.280 06:45:26 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87464 is not found' 00:20:31.280 06:45:26 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:31.280 06:45:26 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:31.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.847 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:31.847 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:31.847 06:45:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:31.847 06:45:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:31.847 06:45:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.847 06:45:27 -- nvmf/common.sh@277 -- # 
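clean_kernel_target then unwinds that tree in reverse: namespace disabled first, the port's subsystem link removed, directories deleted leaf to root, modules unloaded last, which is the only order configfs accepts. Condensed from the trace (the enable path is inferred from the echo 0):

  echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
  rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/kernel_target
  modprobe -r nvmet_tcp nvmet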
remove_spdk_ns 00:20:31.847 06:45:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.847 06:45:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:31.847 06:45:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.847 06:45:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:31.847 ************************************ 00:20:31.847 END TEST nvmf_abort_qd_sizes 00:20:31.847 ************************************ 00:20:31.847 00:20:31.847 real 0m24.550s 00:20:31.847 user 0m49.720s 00:20:31.847 sys 0m5.773s 00:20:31.847 06:45:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:31.847 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:20:31.847 06:45:27 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:31.847 06:45:27 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:20:31.847 06:45:27 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:20:31.847 06:45:27 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:31.847 06:45:27 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:31.847 06:45:27 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:20:31.847 06:45:27 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:20:31.847 06:45:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.847 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:20:31.847 06:45:27 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:20:31.847 06:45:27 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:20:31.847 06:45:27 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:20:31.847 06:45:27 -- common/autotest_common.sh@10 -- # set +x 00:20:33.755 INFO: APP EXITING 00:20:33.755 INFO: killing all VMs 00:20:33.755 INFO: killing vhost app 00:20:33.755 INFO: EXIT DONE 00:20:34.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.323 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:34.323 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:34.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.149 Cleaning 00:20:35.149 Removing: /var/run/dpdk/spdk0/config 00:20:35.149 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:35.149 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:35.149 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:35.149 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:35.149 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:35.149 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:35.149 Removing: /var/run/dpdk/spdk1/config 00:20:35.149 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:35.149 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:35.149 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:35.149 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:35.149 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:35.149 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:35.149 Removing: /var/run/dpdk/spdk2/config 00:20:35.149 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:35.149 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:35.149 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:35.149 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:35.149 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:35.149 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:35.149 Removing: /var/run/dpdk/spdk3/config 00:20:35.149 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:35.149 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:35.149 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:35.149 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:35.149 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:35.149 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:35.149 Removing: /var/run/dpdk/spdk4/config 00:20:35.149 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:35.149 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:35.149 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:35.149 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:35.149 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:35.149 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:35.149 Removing: /dev/shm/nvmf_trace.0 00:20:35.149 Removing: /dev/shm/spdk_tgt_trace.pid65590 00:20:35.149 Removing: /var/run/dpdk/spdk0 00:20:35.149 Removing: /var/run/dpdk/spdk1 00:20:35.149 Removing: /var/run/dpdk/spdk2 00:20:35.149 Removing: /var/run/dpdk/spdk3 00:20:35.149 Removing: /var/run/dpdk/spdk4 00:20:35.149 Removing: /var/run/dpdk/spdk_pid65438 00:20:35.149 Removing: /var/run/dpdk/spdk_pid65590 00:20:35.149 Removing: /var/run/dpdk/spdk_pid65843 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66028 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66180 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66247 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66330 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66428 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66501 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66545 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66575 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66638 00:20:35.149 Removing: /var/run/dpdk/spdk_pid66708 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67140 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67187 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67232 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67254 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67310 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67326 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67387 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67403 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67449 00:20:35.149 Removing: /var/run/dpdk/spdk_pid67467 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67507 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67525 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67649 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67690 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67766 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67812 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67842 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67895 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67920 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67949 00:20:35.408 Removing: /var/run/dpdk/spdk_pid67965 
00:20:35.408 Removing: /var/run/dpdk/spdk_pid68000 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68019 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68048 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68062 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68097 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68116 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68145 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68165 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68198 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68213 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68250 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68264 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68300 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68314 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68350 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68364 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68398 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68418 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68447 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68461 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68501 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68515 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68544 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68569 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68598 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68612 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68642 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68666 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68695 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68712 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68750 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68772 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68804 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68824 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68858 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68872 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68908 00:20:35.408 Removing: /var/run/dpdk/spdk_pid68979 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69074 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69406 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69422 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69454 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69467 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69475 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69493 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69511 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69519 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69537 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69555 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69563 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69581 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69590 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69607 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69625 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69632 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69651 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69669 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69676 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69684 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69719 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69728 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69759 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69829 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69850 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69859 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69888 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69892 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69900 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69940 00:20:35.408 Removing: 
/var/run/dpdk/spdk_pid69947 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69978 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69980 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69988 00:20:35.408 Removing: /var/run/dpdk/spdk_pid69995 00:20:35.667 Removing: /var/run/dpdk/spdk_pid69997 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70010 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70012 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70020 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70046 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70067 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70077 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70105 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70115 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70117 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70157 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70169 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70195 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70203 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70205 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70212 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70220 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70227 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70235 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70237 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70318 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70360 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70466 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70494 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70536 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70556 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70565 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70585 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70620 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70629 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70705 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70719 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70762 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70842 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70893 00:20:35.667 Removing: /var/run/dpdk/spdk_pid70916 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71010 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71051 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71088 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71306 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71398 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71425 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71755 00:20:35.667 Removing: /var/run/dpdk/spdk_pid71793 00:20:35.667 Removing: /var/run/dpdk/spdk_pid72101 00:20:35.667 Removing: /var/run/dpdk/spdk_pid72513 00:20:35.667 Removing: /var/run/dpdk/spdk_pid72788 00:20:35.667 Removing: /var/run/dpdk/spdk_pid73545 00:20:35.667 Removing: /var/run/dpdk/spdk_pid74368 00:20:35.667 Removing: /var/run/dpdk/spdk_pid74480 00:20:35.667 Removing: /var/run/dpdk/spdk_pid74548 00:20:35.667 Removing: /var/run/dpdk/spdk_pid75825 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76041 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76347 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76457 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76590 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76618 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76638 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76660 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76746 00:20:35.667 Removing: /var/run/dpdk/spdk_pid76888 00:20:35.667 Removing: /var/run/dpdk/spdk_pid77031 00:20:35.667 Removing: /var/run/dpdk/spdk_pid77106 00:20:35.667 Removing: /var/run/dpdk/spdk_pid77500 00:20:35.667 Removing: /var/run/dpdk/spdk_pid77838 
00:20:35.667 Removing: /var/run/dpdk/spdk_pid77844
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80053
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80055
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80328
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80349
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80363
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80394
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80403
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80483
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80490
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80598
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80606
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80714
00:20:35.667 Removing: /var/run/dpdk/spdk_pid80716
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81125
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81168
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81277
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81356
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81669
00:20:35.667 Removing: /var/run/dpdk/spdk_pid81871
00:20:35.667 Removing: /var/run/dpdk/spdk_pid82256
00:20:36.015 Removing: /var/run/dpdk/spdk_pid82780
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83227
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83275
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83328
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83384
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83485
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83547
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83603
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83663
00:20:36.015 Removing: /var/run/dpdk/spdk_pid83973
00:20:36.015 Removing: /var/run/dpdk/spdk_pid85148
00:20:36.015 Removing: /var/run/dpdk/spdk_pid85281
00:20:36.015 Removing: /var/run/dpdk/spdk_pid85522
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86077
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86238
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86399
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86497
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86744
00:20:36.015 Removing: /var/run/dpdk/spdk_pid86853
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87515
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87556
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87586
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87831
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87872
00:20:36.015 Removing: /var/run/dpdk/spdk_pid87907
00:20:36.015 Clean
00:20:36.015 killing process with pid 59788
00:20:36.015 killing process with pid 59797
00:20:36.015 06:45:31 -- common/autotest_common.sh@1446 -- # return 0
00:20:36.015 06:45:31 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:20:36.015 06:45:31 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:36.015 06:45:31 -- common/autotest_common.sh@10 -- # set +x
00:20:36.015 06:45:31 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:20:36.015 06:45:31 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:36.015 06:45:31 -- common/autotest_common.sh@10 -- # set +x
00:20:36.015 06:45:31 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:36.015 06:45:31 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:36.015 06:45:31 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:36.015 06:45:31 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:20:36.015 06:45:31 -- spdk/autotest.sh@383 -- # hostname
00:20:36.015 06:45:31 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:36.276 geninfo: WARNING: invalid characters removed from testname!
00:20:58.210 06:45:53 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:01.496 06:45:56 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:04.030 06:45:59 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:06.563 06:46:01 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:09.094 06:46:04 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:11.654 06:46:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:14.945 06:46:09 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:14.945 06:46:09 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:21:14.945 06:46:09 -- common/autotest_common.sh@1690 -- $ lcov --version
00:21:14.945 06:46:09 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:21:14.945 06:46:09 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:21:14.945 06:46:09 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:21:14.945 06:46:09 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:21:14.945 06:46:09 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:21:14.945 06:46:09 -- scripts/common.sh@335 -- $ IFS=.-:
00:21:14.945 06:46:09 -- scripts/common.sh@335 -- $ read -ra ver1
00:21:14.945 06:46:09 -- scripts/common.sh@336 -- $ IFS=.-:
00:21:14.945 06:46:09 -- scripts/common.sh@336 -- $ read -ra ver2
00:21:14.945 06:46:09 -- scripts/common.sh@337 -- $ local 'op=<'
00:21:14.945 06:46:09 -- scripts/common.sh@339 -- $ ver1_l=2
00:21:14.945 06:46:09 -- scripts/common.sh@340 -- $ ver2_l=1
00:21:14.945 06:46:09 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:21:14.945 06:46:09 -- scripts/common.sh@343 -- $ case "$op" in
00:21:14.945 06:46:09 -- scripts/common.sh@344 -- $ : 1
00:21:14.945 06:46:09 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:21:14.945 06:46:09 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:14.945 06:46:09 -- scripts/common.sh@364 -- $ decimal 1
00:21:14.945 06:46:09 -- scripts/common.sh@352 -- $ local d=1
00:21:14.945 06:46:09 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:21:14.945 06:46:09 -- scripts/common.sh@354 -- $ echo 1
00:21:14.945 06:46:09 -- scripts/common.sh@364 -- $ ver1[v]=1
00:21:14.945 06:46:09 -- scripts/common.sh@365 -- $ decimal 2
00:21:14.945 06:46:09 -- scripts/common.sh@352 -- $ local d=2
00:21:14.945 06:46:09 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:21:14.945 06:46:09 -- scripts/common.sh@354 -- $ echo 2
00:21:14.945 06:46:09 -- scripts/common.sh@365 -- $ ver2[v]=2
00:21:14.945 06:46:09 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:21:14.945 06:46:09 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:21:14.945 06:46:09 -- scripts/common.sh@367 -- $ return 0
00:21:14.945 06:46:09 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:14.945 06:46:09 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:21:14.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:14.945 --rc genhtml_branch_coverage=1
00:21:14.945 --rc genhtml_function_coverage=1
00:21:14.945 --rc genhtml_legend=1
00:21:14.945 --rc geninfo_all_blocks=1
00:21:14.945 --rc geninfo_unexecuted_blocks=1
00:21:14.945
00:21:14.945 '
00:21:14.946 06:46:09 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:21:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:14.946 --rc genhtml_branch_coverage=1
00:21:14.946 --rc genhtml_function_coverage=1
00:21:14.946 --rc genhtml_legend=1
00:21:14.946 --rc geninfo_all_blocks=1
00:21:14.946 --rc geninfo_unexecuted_blocks=1
00:21:14.946
00:21:14.946 '
00:21:14.946 06:46:09 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:21:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:14.946 --rc genhtml_branch_coverage=1
00:21:14.946 --rc genhtml_function_coverage=1
00:21:14.946 --rc genhtml_legend=1
00:21:14.946 --rc geninfo_all_blocks=1
00:21:14.946 --rc geninfo_unexecuted_blocks=1
00:21:14.946
00:21:14.946 '
00:21:14.946 06:46:09 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:21:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:14.946 --rc genhtml_branch_coverage=1
00:21:14.946 --rc genhtml_function_coverage=1
00:21:14.946 --rc genhtml_legend=1
00:21:14.946 --rc geninfo_all_blocks=1
00:21:14.946 --rc geninfo_unexecuted_blocks=1
00:21:14.946
00:21:14.946 '
00:21:14.946 06:46:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:14.946 06:46:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:14.946 06:46:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:14.946 06:46:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:14.946 06:46:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:14.946 06:46:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:14.946 06:46:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:14.946 06:46:09 -- paths/export.sh@5 -- $ export PATH
00:21:14.946 06:46:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:14.946 06:46:09 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:14.946 06:46:09 -- common/autobuild_common.sh@440 -- $ date +%s
00:21:14.946 06:46:09 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733381169.XXXXXX
00:21:14.946 06:46:09 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733381169.qy7Wig
00:21:14.946 06:46:09 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:21:14.946 06:46:09 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:21:14.946 06:46:09 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:21:14.946 06:46:09 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:21:14.946 06:46:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:14.946 06:46:09 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:14.946 06:46:09 -- common/autobuild_common.sh@456 -- $ get_config_params
00:21:14.946 06:46:09 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:21:14.946 06:46:09 -- common/autotest_common.sh@10 -- $ set +x
00:21:14.946 06:46:09 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:21:14.946 06:46:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:14.946 06:46:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
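Note on the coverage pass traced at spdk/autotest.sh@383-392 above: lcov first captures the counters the test run left in the build tree, appends that capture to the pre-test baseline, and then strips third-party and helper-app sources out of the combined tracefile. A condensed sketch of the same sequence follows, assuming the --rc flags have already been collected into $LCOV_OPTS as the export above does; the loop is an editorial condensation of the log's individual -r calls (the '/usr/*' pass there additionally carried --ignore-errors unused,unused):

    #!/usr/bin/env bash
    # Sketch of the coverage pass traced above; paths match the log.
    out=/home/vagrant/spdk_repo/spdk/../output
    repo=/home/vagrant/spdk_repo/spdk

    # 1. Capture test-time counters (-c) from the build tree (-d), tag the
    #    tracefile with the VM hostname (-t), and skip out-of-tree sources.
    lcov $LCOV_OPTS -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

    # 2. Append (-a) the pre-test baseline and the test capture into one total.
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # 3. Remove (-r) third-party and helper-app sources so the report covers SPDK itself.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done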
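The scripts/common.sh trace above shows why those --rc flags are needed at all: cmp_versions 1.15 '<' 2 splits the two version strings on '.', '-' and ':' and compares them field by field, and because the installed lcov reports 1.15, which sorts below 2, autotest_common.sh falls back to the old-style --rc option spelling. A standalone sketch of that comparison, reduced to the '<' case only; version_lt is a hypothetical stand-in for SPDK's lt/cmp_versions helpers, which also handle >, <= and >=, and like this sketch it assumes numeric fields:

    #!/usr/bin/env bash
    # Sketch of the field-wise version test traced from scripts/common.sh above.
    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc options"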
00:21:14.946 06:46:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:14.946 06:46:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:21:14.946 06:46:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:14.946 06:46:09 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:14.946 06:46:09 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:14.946 06:46:09 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:14.946 06:46:09 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:14.946 06:46:09 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:14.956 + [[ -n 5977 ]]
00:21:14.956 + sudo kill 5977
00:21:14.966 [Pipeline] }
00:21:14.971 [Pipeline] // timeout
00:21:14.977 [Pipeline] }
00:21:14.991 [Pipeline] // stage
00:21:14.996 [Pipeline] }
00:21:15.012 [Pipeline] // catchError
00:21:15.023 [Pipeline] stage
00:21:15.026 [Pipeline] { (Stop VM)
00:21:15.044 [Pipeline] sh
00:21:15.328 + vagrant halt
00:21:19.525 ==> default: Halting domain...
00:21:24.818 [Pipeline] sh
00:21:25.097 + vagrant destroy -f
00:21:28.398 ==> default: Removing domain...
00:21:28.411 [Pipeline] sh
00:21:28.739 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:21:28.747 [Pipeline] }
00:21:28.762 [Pipeline] // stage
00:21:28.767 [Pipeline] }
00:21:28.781 [Pipeline] // dir
00:21:28.787 [Pipeline] }
00:21:28.806 [Pipeline] // wrap
00:21:28.811 [Pipeline] }
00:21:28.822 [Pipeline] // catchError
00:21:28.829 [Pipeline] stage
00:21:28.831 [Pipeline] { (Epilogue)
00:21:28.842 [Pipeline] sh
00:21:29.160 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:34.442 [Pipeline] catchError
00:21:34.444 [Pipeline] {
00:21:34.457 [Pipeline] sh
00:21:34.738 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:34.996 Artifacts sizes are good
00:21:35.005 [Pipeline] }
00:21:35.019 [Pipeline] // catchError
00:21:35.032 [Pipeline] archiveArtifacts
00:21:35.039 Archiving artifacts
00:21:35.153 [Pipeline] cleanWs
00:21:35.165 [WS-CLEANUP] Deleting project workspace...
00:21:35.165 [WS-CLEANUP] Deferred wipeout is used...
00:21:35.170 [WS-CLEANUP] done
00:21:35.172 [Pipeline] }
00:21:35.184 [Pipeline] // stage
00:21:35.188 [Pipeline] }
00:21:35.197 [Pipeline] // node
00:21:35.201 [Pipeline] End of Pipeline
00:21:35.228 Finished: SUCCESS
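For reference, the Stop VM and Epilogue stages above boil down to a short teardown recipe: halt the Vagrant worker, destroy its domain, and move the run's output into the Jenkins workspace so the archiving steps can compress and size-check it. A minimal sketch, assuming this job's workspace path from the log; the set -euo pipefail guard is an editorial addition rather than part of the traced pipeline:

    #!/usr/bin/env bash
    # Sketch of the Stop VM / Epilogue teardown traced above.
    set -euo pipefail

    workspace=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest

    vagrant halt                    # graceful shutdown of the test VM
    vagrant destroy -f              # then remove the domain entirely
    mv output "$workspace/output"   # hand the artifacts to the archiving steps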