00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1011
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3678
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.000 Started by timer
00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:05.624 The recommended git tool is: git
00:00:05.625 using credential 00000000-0000-0000-0000-000000000002
00:00:05.627 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:05.639 Fetching changes from the remote Git repository
00:00:05.642 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:05.654 Using shallow fetch with depth 1
00:00:05.654 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:05.654 > git --version # timeout=10
00:00:05.664 > git --version # 'git version 2.39.2'
00:00:05.664 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:05.676 Setting http proxy: proxy-dmz.intel.com:911
00:00:05.676 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.909 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.922 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.937 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:10.937 > git config core.sparsecheckout # timeout=10
00:00:10.949 > git read-tree -mu HEAD # timeout=10
00:00:10.969 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:10.994 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:10.994 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:11.119 [Pipeline] Start of Pipeline
00:00:11.131 [Pipeline] library
00:00:11.132 Loading library shm_lib@master
00:00:11.132 Library shm_lib@master is cached. Copying from home.
00:00:11.149 [Pipeline] node
00:00:11.156 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:00:11.159 [Pipeline] {
00:00:11.166 [Pipeline] catchError
00:00:11.167 [Pipeline] {
00:00:11.179 [Pipeline] wrap
00:00:11.188 [Pipeline] {
00:00:11.198 [Pipeline] stage
00:00:11.199 [Pipeline] { (Prologue)
00:00:11.216 [Pipeline] echo
00:00:11.217 Node: VM-host-SM17
00:00:11.223 [Pipeline] cleanWs
00:00:11.232 [WS-CLEANUP] Deleting project workspace...
00:00:11.232 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.238 [WS-CLEANUP] done 00:00:11.539 [Pipeline] setCustomBuildProperty 00:00:11.661 [Pipeline] httpRequest 00:00:12.274 [Pipeline] echo 00:00:12.275 Sorcerer 10.211.164.20 is alive 00:00:12.283 [Pipeline] retry 00:00:12.284 [Pipeline] { 00:00:12.293 [Pipeline] httpRequest 00:00:12.296 HttpMethod: GET 00:00:12.297 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.298 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.300 Response Code: HTTP/1.1 200 OK 00:00:12.301 Success: Status code 200 is in the accepted range: 200,404 00:00:12.301 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:39.584 [Pipeline] } 00:00:39.602 [Pipeline] // retry 00:00:39.610 [Pipeline] sh 00:00:39.893 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:39.907 [Pipeline] httpRequest 00:00:40.355 [Pipeline] echo 00:00:40.357 Sorcerer 10.211.164.20 is alive 00:00:40.366 [Pipeline] retry 00:00:40.368 [Pipeline] { 00:00:40.382 [Pipeline] httpRequest 00:00:40.387 HttpMethod: GET 00:00:40.388 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:40.388 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:40.393 Response Code: HTTP/1.1 200 OK 00:00:40.394 Success: Status code 200 is in the accepted range: 200,404 00:00:40.394 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:04:31.433 [Pipeline] } 00:04:31.454 [Pipeline] // retry 00:04:31.462 [Pipeline] sh 00:04:31.743 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:04:35.042 [Pipeline] sh 00:04:35.321 + git -C spdk log --oneline -n5 00:04:35.321 c13c99a5e test: Various fixes for Fedora40 00:04:35.321 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:04:35.321 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:04:35.321 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:04:35.321 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:04:35.340 [Pipeline] withCredentials 00:04:35.352 > git --version # timeout=10 00:04:35.369 > git --version # 'git version 2.39.2' 00:04:35.382 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:04:35.385 [Pipeline] { 00:04:35.396 [Pipeline] retry 00:04:35.398 [Pipeline] { 00:04:35.414 [Pipeline] sh 00:04:35.692 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:04:35.702 [Pipeline] } 00:04:35.720 [Pipeline] // retry 00:04:35.725 [Pipeline] } 00:04:35.741 [Pipeline] // withCredentials 00:04:35.751 [Pipeline] httpRequest 00:04:36.396 [Pipeline] echo 00:04:36.398 Sorcerer 10.211.164.20 is alive 00:04:36.409 [Pipeline] retry 00:04:36.412 [Pipeline] { 00:04:36.427 [Pipeline] httpRequest 00:04:36.431 HttpMethod: GET 00:04:36.432 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:36.433 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:36.434 Response Code: HTTP/1.1 200 OK 00:04:36.434 Success: Status code 200 is in the accepted range: 200,404 00:04:36.435 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:43.571 [Pipeline] } 
00:04:43.586 [Pipeline] // retry 00:04:43.594 [Pipeline] sh 00:04:43.999 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:45.915 [Pipeline] sh 00:04:46.196 + git -C dpdk log --oneline -n5 00:04:46.196 caf0f5d395 version: 22.11.4 00:04:46.196 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:46.196 dc9c799c7d vhost: fix missing spinlock unlock 00:04:46.196 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:46.196 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:46.215 [Pipeline] writeFile 00:04:46.233 [Pipeline] sh 00:04:46.515 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:46.527 [Pipeline] sh 00:04:46.807 + cat autorun-spdk.conf 00:04:46.807 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:46.807 SPDK_TEST_NVMF=1 00:04:46.807 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:46.808 SPDK_TEST_URING=1 00:04:46.808 SPDK_TEST_USDT=1 00:04:46.808 SPDK_RUN_UBSAN=1 00:04:46.808 NET_TYPE=virt 00:04:46.808 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:46.808 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:46.808 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:46.815 RUN_NIGHTLY=1 00:04:46.817 [Pipeline] } 00:04:46.831 [Pipeline] // stage 00:04:46.846 [Pipeline] stage 00:04:46.848 [Pipeline] { (Run VM) 00:04:46.860 [Pipeline] sh 00:04:47.140 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:47.140 + echo 'Start stage prepare_nvme.sh' 00:04:47.140 Start stage prepare_nvme.sh 00:04:47.140 + [[ -n 4 ]] 00:04:47.140 + disk_prefix=ex4 00:04:47.140 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:04:47.140 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:04:47.140 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:04:47.140 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:47.140 ++ SPDK_TEST_NVMF=1 00:04:47.140 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:47.140 ++ SPDK_TEST_URING=1 00:04:47.140 ++ SPDK_TEST_USDT=1 00:04:47.140 ++ SPDK_RUN_UBSAN=1 00:04:47.140 ++ NET_TYPE=virt 00:04:47.140 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:47.140 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:47.140 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:47.140 ++ RUN_NIGHTLY=1 00:04:47.140 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:47.140 + nvme_files=() 00:04:47.140 + declare -A nvme_files 00:04:47.140 + backend_dir=/var/lib/libvirt/images/backends 00:04:47.140 + nvme_files['nvme.img']=5G 00:04:47.140 + nvme_files['nvme-cmb.img']=5G 00:04:47.140 + nvme_files['nvme-multi0.img']=4G 00:04:47.140 + nvme_files['nvme-multi1.img']=4G 00:04:47.140 + nvme_files['nvme-multi2.img']=4G 00:04:47.140 + nvme_files['nvme-openstack.img']=8G 00:04:47.140 + nvme_files['nvme-zns.img']=5G 00:04:47.141 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:47.141 + (( SPDK_TEST_FTL == 1 )) 00:04:47.141 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:47.141 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:47.141 + for nvme in "${!nvme_files[@]}" 00:04:47.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:04:47.141 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:47.141 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:04:47.141 + echo 'End stage prepare_nvme.sh' 00:04:47.141 End stage prepare_nvme.sh 00:04:47.153 [Pipeline] sh 00:04:47.435 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:47.435 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:04:47.435 00:04:47.435 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:04:47.435 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:04:47.435 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:47.435 HELP=0 00:04:47.435 DRY_RUN=0 00:04:47.435 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:04:47.435 NVME_DISKS_TYPE=nvme,nvme, 00:04:47.435 NVME_AUTO_CREATE=0 00:04:47.435 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:04:47.435 NVME_CMB=,, 00:04:47.435 NVME_PMR=,, 00:04:47.435 NVME_ZNS=,, 00:04:47.435 NVME_MS=,, 00:04:47.435 NVME_FDP=,, 
00:04:47.435 SPDK_VAGRANT_DISTRO=fedora39 00:04:47.435 SPDK_VAGRANT_VMCPU=10 00:04:47.435 SPDK_VAGRANT_VMRAM=12288 00:04:47.435 SPDK_VAGRANT_PROVIDER=libvirt 00:04:47.435 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:47.435 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:47.435 SPDK_OPENSTACK_NETWORK=0 00:04:47.435 VAGRANT_PACKAGE_BOX=0 00:04:47.435 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:04:47.435 FORCE_DISTRO=true 00:04:47.435 VAGRANT_BOX_VERSION= 00:04:47.435 EXTRA_VAGRANTFILES= 00:04:47.435 NIC_MODEL=e1000 00:04:47.435 00:04:47.435 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:04:47.435 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:04:50.722 Bringing machine 'default' up with 'libvirt' provider... 00:04:50.979 ==> default: Creating image (snapshot of base box volume). 00:04:50.979 ==> default: Creating domain with the following settings... 00:04:50.979 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732880936_9121eb2f518f3bd78690 00:04:50.979 ==> default: -- Domain type: kvm 00:04:50.979 ==> default: -- Cpus: 10 00:04:50.979 ==> default: -- Feature: acpi 00:04:50.979 ==> default: -- Feature: apic 00:04:50.979 ==> default: -- Feature: pae 00:04:50.979 ==> default: -- Memory: 12288M 00:04:50.979 ==> default: -- Memory Backing: hugepages: 00:04:50.979 ==> default: -- Management MAC: 00:04:50.979 ==> default: -- Loader: 00:04:50.979 ==> default: -- Nvram: 00:04:50.979 ==> default: -- Base box: spdk/fedora39 00:04:50.979 ==> default: -- Storage pool: default 00:04:50.979 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732880936_9121eb2f518f3bd78690.img (20G) 00:04:50.979 ==> default: -- Volume Cache: default 00:04:50.979 ==> default: -- Kernel: 00:04:50.979 ==> default: -- Initrd: 00:04:50.979 ==> default: -- Graphics Type: vnc 00:04:50.979 ==> default: -- Graphics Port: -1 00:04:50.979 ==> default: -- Graphics IP: 127.0.0.1 00:04:50.979 ==> default: -- Graphics Password: Not defined 00:04:50.979 ==> default: -- Video Type: cirrus 00:04:50.979 ==> default: -- Video VRAM: 9216 00:04:50.979 ==> default: -- Sound Type: 00:04:50.979 ==> default: -- Keymap: en-us 00:04:50.979 ==> default: -- TPM Path: 00:04:50.979 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:50.979 ==> default: -- Command line args: 00:04:50.979 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:04:50.980 ==> default: -> value=-drive, 00:04:50.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:04:50.980 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:50.980 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:04:50.980 ==> default: -> value=-drive, 00:04:50.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:50.980 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:50.980 ==> default: -> value=-drive, 00:04:50.980 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:50.980 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:50.980 ==> default: -> value=-drive, 00:04:50.980 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:50.980 ==> default: -> value=-device, 00:04:50.980 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:51.237 ==> default: Creating shared folders metadata... 00:04:51.237 ==> default: Starting domain. 00:04:52.609 ==> default: Waiting for domain to get an IP address... 00:05:10.741 ==> default: Waiting for SSH to become available... 00:05:10.741 ==> default: Configuring and enabling network interfaces... 00:05:13.273 default: SSH address: 192.168.121.205:22 00:05:13.273 default: SSH username: vagrant 00:05:13.273 default: SSH auth method: private key 00:05:15.804 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:23.914 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:05:29.239 ==> default: Mounting SSHFS shared folder... 00:05:30.612 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:30.612 ==> default: Checking Mount.. 00:05:31.990 ==> default: Folder Successfully Mounted! 00:05:31.990 ==> default: Running provisioner: file... 00:05:32.924 default: ~/.gitconfig => .gitconfig 00:05:33.491 00:05:33.491 SUCCESS! 00:05:33.491 00:05:33.491 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:33.491 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:33.491 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:05:33.491 00:05:33.499 [Pipeline] } 00:05:33.516 [Pipeline] // stage 00:05:33.527 [Pipeline] dir 00:05:33.527 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:05:33.529 [Pipeline] { 00:05:33.543 [Pipeline] catchError 00:05:33.545 [Pipeline] { 00:05:33.558 [Pipeline] sh 00:05:33.837 + + vagrant ssh-config --host vagrant 00:05:33.837 sed -ne /^Host/,$p 00:05:33.837 + tee ssh_conf 00:05:38.034 Host vagrant 00:05:38.034 HostName 192.168.121.205 00:05:38.035 User vagrant 00:05:38.035 Port 22 00:05:38.035 UserKnownHostsFile /dev/null 00:05:38.035 StrictHostKeyChecking no 00:05:38.035 PasswordAuthentication no 00:05:38.035 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:38.035 IdentitiesOnly yes 00:05:38.035 LogLevel FATAL 00:05:38.035 ForwardAgent yes 00:05:38.035 ForwardX11 yes 00:05:38.035 00:05:38.050 [Pipeline] withEnv 00:05:38.053 [Pipeline] { 00:05:38.070 [Pipeline] sh 00:05:38.351 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:38.351 source /etc/os-release 00:05:38.351 [[ -e /image.version ]] && img=$(< /image.version) 00:05:38.351 # Minimal, systemd-like check. 
00:05:38.351 if [[ -e /.dockerenv ]]; then 00:05:38.351 # Clear garbage from the node's name: 00:05:38.351 # agt-er_autotest_547-896 -> autotest_547-896 00:05:38.351 # $HOSTNAME is the actual container id 00:05:38.351 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:38.351 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:38.351 # We can assume this is a mount from a host where container is running, 00:05:38.351 # so fetch its hostname to easily identify the target swarm worker. 00:05:38.351 container="$(< /etc/hostname) ($agent)" 00:05:38.351 else 00:05:38.351 # Fallback 00:05:38.351 container=$agent 00:05:38.351 fi 00:05:38.351 fi 00:05:38.351 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:38.351 00:05:38.621 [Pipeline] } 00:05:38.638 [Pipeline] // withEnv 00:05:38.649 [Pipeline] setCustomBuildProperty 00:05:38.670 [Pipeline] stage 00:05:38.673 [Pipeline] { (Tests) 00:05:38.693 [Pipeline] sh 00:05:38.971 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:39.244 [Pipeline] sh 00:05:39.531 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:39.547 [Pipeline] timeout 00:05:39.547 Timeout set to expire in 1 hr 0 min 00:05:39.549 [Pipeline] { 00:05:39.566 [Pipeline] sh 00:05:39.846 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:40.414 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:05:40.427 [Pipeline] sh 00:05:40.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:40.979 [Pipeline] sh 00:05:41.300 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:41.315 [Pipeline] sh 00:05:41.594 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:05:41.853 ++ readlink -f spdk_repo 00:05:41.853 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:41.853 + [[ -n /home/vagrant/spdk_repo ]] 00:05:41.853 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:41.853 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:41.853 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:41.853 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:41.853 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:41.853 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:05:41.853 + cd /home/vagrant/spdk_repo 00:05:41.853 + source /etc/os-release 00:05:41.853 ++ NAME='Fedora Linux' 00:05:41.853 ++ VERSION='39 (Cloud Edition)' 00:05:41.853 ++ ID=fedora 00:05:41.853 ++ VERSION_ID=39 00:05:41.853 ++ VERSION_CODENAME= 00:05:41.853 ++ PLATFORM_ID=platform:f39 00:05:41.853 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:41.853 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:41.853 ++ LOGO=fedora-logo-icon 00:05:41.853 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:41.853 ++ HOME_URL=https://fedoraproject.org/ 00:05:41.853 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:41.853 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:41.853 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:41.853 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:41.853 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:41.853 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:41.853 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:41.853 ++ SUPPORT_END=2024-11-12 00:05:41.853 ++ VARIANT='Cloud Edition' 00:05:41.853 ++ VARIANT_ID=cloud 00:05:41.853 + uname -a 00:05:41.853 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:41.853 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.853 Hugepages 00:05:41.853 node hugesize free / total 00:05:41.853 node0 1048576kB 0 / 0 00:05:41.853 node0 2048kB 0 / 0 00:05:41.853 00:05:41.853 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.853 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:41.853 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:41.853 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:41.853 + rm -f /tmp/spdk-ld-path 00:05:41.853 + source autorun-spdk.conf 00:05:41.853 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:41.853 ++ SPDK_TEST_NVMF=1 00:05:41.853 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:41.853 ++ SPDK_TEST_URING=1 00:05:41.853 ++ SPDK_TEST_USDT=1 00:05:41.853 ++ SPDK_RUN_UBSAN=1 00:05:41.853 ++ NET_TYPE=virt 00:05:41.853 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:41.853 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:41.853 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:41.853 ++ RUN_NIGHTLY=1 00:05:41.853 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:41.853 + [[ -n '' ]] 00:05:41.853 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:42.112 + for M in /var/spdk/build-*-manifest.txt 00:05:42.112 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:42.113 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.113 + for M in /var/spdk/build-*-manifest.txt 00:05:42.113 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:42.113 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.113 + for M in /var/spdk/build-*-manifest.txt 00:05:42.113 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:42.113 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:42.113 ++ uname 00:05:42.113 + [[ Linux == \L\i\n\u\x ]] 00:05:42.113 + sudo dmesg -T 00:05:42.113 + sudo dmesg --clear 00:05:42.113 + dmesg_pid=5915 00:05:42.113 + sudo dmesg -Tw 00:05:42.113 + [[ Fedora Linux == FreeBSD ]] 00:05:42.113 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.113 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.113 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:42.113 + [[ -x /usr/src/fio-static/fio ]] 00:05:42.113 + export FIO_BIN=/usr/src/fio-static/fio 00:05:42.113 + FIO_BIN=/usr/src/fio-static/fio 00:05:42.113 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:42.113 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:42.113 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:42.113 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.113 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.113 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:42.113 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.113 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.113 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:42.113 Test configuration: 00:05:42.113 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:42.113 SPDK_TEST_NVMF=1 00:05:42.113 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:42.113 SPDK_TEST_URING=1 00:05:42.113 SPDK_TEST_USDT=1 00:05:42.113 SPDK_RUN_UBSAN=1 00:05:42.113 NET_TYPE=virt 00:05:42.113 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:42.113 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:42.113 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:42.113 RUN_NIGHTLY=1 11:49:47 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:05:42.113 11:49:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.113 11:49:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:42.113 11:49:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.113 11:49:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.113 11:49:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.113 11:49:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.113 11:49:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.113 11:49:47 -- paths/export.sh@5 -- $ export PATH 00:05:42.113 11:49:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.113 11:49:47 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:42.113 11:49:47 -- common/autobuild_common.sh@440 -- $ date +%s 00:05:42.113 11:49:47 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732880987.XXXXXX 00:05:42.113 11:49:47 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732880987.rShV55 00:05:42.113 11:49:47 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:05:42.113 11:49:47 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:05:42.113 11:49:47 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:42.113 11:49:47 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:05:42.113 11:49:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:42.113 11:49:47 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:42.113 11:49:47 -- common/autobuild_common.sh@456 -- $ get_config_params 00:05:42.113 11:49:47 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:05:42.113 11:49:47 -- common/autotest_common.sh@10 -- $ set +x 00:05:42.113 11:49:47 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:05:42.113 11:49:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:42.113 11:49:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:42.113 11:49:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:42.113 11:49:47 -- spdk/autobuild.sh@16 -- $ date -u 00:05:42.113 Fri Nov 29 11:49:47 AM UTC 2024 00:05:42.113 11:49:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:42.113 LTS-67-gc13c99a5e 00:05:42.113 11:49:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:42.113 11:49:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:42.113 11:49:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:42.113 11:49:47 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:05:42.113 11:49:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:42.113 11:49:47 -- common/autotest_common.sh@10 -- $ set +x 00:05:42.373 ************************************ 00:05:42.373 START TEST ubsan 00:05:42.373 ************************************ 00:05:42.373 using ubsan 00:05:42.373 11:49:47 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:05:42.373 00:05:42.373 real 0m0.000s 00:05:42.373 user 0m0.000s 00:05:42.373 sys 0m0.000s 00:05:42.373 11:49:47 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:42.373 11:49:47 -- common/autotest_common.sh@10 -- $ set +x 00:05:42.373 ************************************ 00:05:42.373 END TEST ubsan 00:05:42.373 ************************************ 00:05:42.373 11:49:47 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:05:42.373 11:49:47 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:05:42.373 11:49:47 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:05:42.373 11:49:47 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:05:42.373 11:49:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:42.373 11:49:47 -- 
common/autotest_common.sh@10 -- $ set +x 00:05:42.373 ************************************ 00:05:42.373 START TEST build_native_dpdk 00:05:42.373 ************************************ 00:05:42.374 11:49:47 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:05:42.374 11:49:47 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:05:42.374 11:49:47 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:05:42.374 11:49:47 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:05:42.374 11:49:47 -- common/autobuild_common.sh@51 -- $ local compiler 00:05:42.374 11:49:47 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:05:42.374 11:49:47 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:05:42.374 11:49:47 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:05:42.374 11:49:47 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:05:42.374 11:49:47 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:05:42.374 11:49:47 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:05:42.374 11:49:47 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:05:42.374 11:49:47 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:05:42.374 11:49:47 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:05:42.374 11:49:47 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:42.374 11:49:47 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:05:42.374 11:49:47 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:05:42.374 11:49:47 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:05:42.374 caf0f5d395 version: 22.11.4 00:05:42.374 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:05:42.374 dc9c799c7d vhost: fix missing spinlock unlock 00:05:42.374 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:05:42.374 6ef77f2a5e net/gve: fix RX buffer size alignment 00:05:42.374 11:49:47 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:05:42.374 11:49:47 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:05:42.374 11:49:47 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:05:42.374 11:49:47 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:05:42.374 11:49:47 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:05:42.374 11:49:47 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:05:42.374 11:49:47 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:05:42.374 11:49:47 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:05:42.374 11:49:47 -- common/autobuild_common.sh@167 -- $ 
cd /home/vagrant/spdk_repo/dpdk 00:05:42.374 11:49:47 -- common/autobuild_common.sh@168 -- $ uname -s 00:05:42.374 11:49:47 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:05:42.374 11:49:47 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:05:42.374 11:49:47 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:05:42.374 11:49:47 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:05:42.374 11:49:47 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:05:42.374 11:49:47 -- scripts/common.sh@335 -- $ IFS=.-: 00:05:42.374 11:49:47 -- scripts/common.sh@335 -- $ read -ra ver1 00:05:42.374 11:49:47 -- scripts/common.sh@336 -- $ IFS=.-: 00:05:42.374 11:49:47 -- scripts/common.sh@336 -- $ read -ra ver2 00:05:42.374 11:49:47 -- scripts/common.sh@337 -- $ local 'op=<' 00:05:42.374 11:49:47 -- scripts/common.sh@339 -- $ ver1_l=3 00:05:42.374 11:49:47 -- scripts/common.sh@340 -- $ ver2_l=3 00:05:42.374 11:49:47 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:05:42.374 11:49:47 -- scripts/common.sh@343 -- $ case "$op" in 00:05:42.374 11:49:47 -- scripts/common.sh@344 -- $ : 1 00:05:42.374 11:49:47 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:05:42.374 11:49:47 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.374 11:49:47 -- scripts/common.sh@364 -- $ decimal 22 00:05:42.374 11:49:47 -- scripts/common.sh@352 -- $ local d=22 00:05:42.374 11:49:47 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:42.374 11:49:47 -- scripts/common.sh@354 -- $ echo 22 00:05:42.374 11:49:47 -- scripts/common.sh@364 -- $ ver1[v]=22 00:05:42.374 11:49:47 -- scripts/common.sh@365 -- $ decimal 21 00:05:42.374 11:49:47 -- scripts/common.sh@352 -- $ local d=21 00:05:42.374 11:49:47 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:05:42.374 11:49:47 -- scripts/common.sh@354 -- $ echo 21 00:05:42.374 11:49:47 -- scripts/common.sh@365 -- $ ver2[v]=21 00:05:42.374 11:49:47 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:05:42.374 11:49:47 -- scripts/common.sh@366 -- $ return 1 00:05:42.374 11:49:47 -- common/autobuild_common.sh@173 -- $ patch -p1 00:05:42.374 patching file config/rte_config.h 00:05:42.374 Hunk #1 succeeded at 60 (offset 1 line). 00:05:42.374 11:49:47 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:05:42.374 11:49:47 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:05:42.374 11:49:47 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:05:42.374 11:49:47 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:05:42.374 11:49:47 -- scripts/common.sh@335 -- $ IFS=.-: 00:05:42.374 11:49:47 -- scripts/common.sh@335 -- $ read -ra ver1 00:05:42.374 11:49:47 -- scripts/common.sh@336 -- $ IFS=.-: 00:05:42.374 11:49:47 -- scripts/common.sh@336 -- $ read -ra ver2 00:05:42.374 11:49:47 -- scripts/common.sh@337 -- $ local 'op=<' 00:05:42.374 11:49:47 -- scripts/common.sh@339 -- $ ver1_l=3 00:05:42.374 11:49:47 -- scripts/common.sh@340 -- $ ver2_l=3 00:05:42.374 11:49:47 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:05:42.374 11:49:47 -- scripts/common.sh@343 -- $ case "$op" in 00:05:42.374 11:49:47 -- scripts/common.sh@344 -- $ : 1 00:05:42.374 11:49:47 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:05:42.374 11:49:47 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.374 11:49:47 -- scripts/common.sh@364 -- $ decimal 22 00:05:42.374 11:49:47 -- scripts/common.sh@352 -- $ local d=22 00:05:42.374 11:49:47 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:42.374 11:49:47 -- scripts/common.sh@354 -- $ echo 22 00:05:42.374 11:49:47 -- scripts/common.sh@364 -- $ ver1[v]=22 00:05:42.374 11:49:47 -- scripts/common.sh@365 -- $ decimal 24 00:05:42.374 11:49:47 -- scripts/common.sh@352 -- $ local d=24 00:05:42.374 11:49:47 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:05:42.374 11:49:47 -- scripts/common.sh@354 -- $ echo 24 00:05:42.374 11:49:47 -- scripts/common.sh@365 -- $ ver2[v]=24 00:05:42.374 11:49:47 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:05:42.374 11:49:47 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:05:42.374 11:49:47 -- scripts/common.sh@367 -- $ return 0 00:05:42.375 11:49:47 -- common/autobuild_common.sh@177 -- $ patch -p1 00:05:42.375 patching file lib/pcapng/rte_pcapng.c 00:05:42.375 Hunk #1 succeeded at 110 (offset -18 lines). 00:05:42.375 11:49:47 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:05:42.375 11:49:47 -- common/autobuild_common.sh@181 -- $ uname -s 00:05:42.375 11:49:47 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:05:42.375 11:49:47 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:05:42.375 11:49:47 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:47.699 The Meson build system 00:05:47.699 Version: 1.5.0 00:05:47.699 Source dir: /home/vagrant/spdk_repo/dpdk 00:05:47.699 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:05:47.699 Build type: native build 00:05:47.699 Program cat found: YES (/usr/bin/cat) 00:05:47.699 Project name: DPDK 00:05:47.699 Project version: 22.11.4 00:05:47.699 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:47.699 C linker for the host machine: gcc ld.bfd 2.40-14 00:05:47.699 Host machine cpu family: x86_64 00:05:47.699 Host machine cpu: x86_64 00:05:47.699 Message: ## Building in Developer Mode ## 00:05:47.699 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:47.699 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:05:47.699 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:05:47.699 Program objdump found: YES (/usr/bin/objdump) 00:05:47.699 Program python3 found: YES (/usr/bin/python3) 00:05:47.699 Program cat found: YES (/usr/bin/cat) 00:05:47.699 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:05:47.699 Checking for size of "void *" : 8 00:05:47.699 Checking for size of "void *" : 8 (cached) 00:05:47.699 Library m found: YES 00:05:47.699 Library numa found: YES 00:05:47.699 Has header "numaif.h" : YES 00:05:47.699 Library fdt found: NO 00:05:47.699 Library execinfo found: NO 00:05:47.699 Has header "execinfo.h" : YES 00:05:47.699 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:47.699 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:47.699 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:47.699 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:47.699 Run-time dependency openssl found: YES 3.1.1 00:05:47.699 Run-time dependency libpcap found: YES 1.10.4 00:05:47.699 Has header "pcap.h" with dependency libpcap: YES 00:05:47.699 Compiler for C supports arguments -Wcast-qual: YES 00:05:47.699 Compiler for C supports arguments -Wdeprecated: YES 00:05:47.699 Compiler for C supports arguments -Wformat: YES 00:05:47.699 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:47.699 Compiler for C supports arguments -Wformat-security: NO 00:05:47.699 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:47.699 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:47.699 Compiler for C supports arguments -Wnested-externs: YES 00:05:47.699 Compiler for C supports arguments -Wold-style-definition: YES 00:05:47.699 Compiler for C supports arguments -Wpointer-arith: YES 00:05:47.699 Compiler for C supports arguments -Wsign-compare: YES 00:05:47.699 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:47.699 Compiler for C supports arguments -Wundef: YES 00:05:47.699 Compiler for C supports arguments -Wwrite-strings: YES 00:05:47.699 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:47.699 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:47.699 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:47.699 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:47.699 Compiler for C supports arguments -mavx512f: YES 00:05:47.699 Checking if "AVX512 checking" compiles: YES 00:05:47.699 Fetching value of define "__SSE4_2__" : 1 00:05:47.699 Fetching value of define "__AES__" : 1 00:05:47.699 Fetching value of define "__AVX__" : 1 00:05:47.699 Fetching value of define "__AVX2__" : 1 00:05:47.699 Fetching value of define "__AVX512BW__" : (undefined) 00:05:47.699 Fetching value of define "__AVX512CD__" : (undefined) 00:05:47.699 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:47.699 Fetching value of define "__AVX512F__" : (undefined) 00:05:47.699 Fetching value of define "__AVX512VL__" : (undefined) 00:05:47.699 Fetching value of define "__PCLMUL__" : 1 00:05:47.699 Fetching value of define "__RDRND__" : 1 00:05:47.699 Fetching value of define "__RDSEED__" : 1 00:05:47.699 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:47.699 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:47.699 Message: lib/kvargs: Defining dependency "kvargs" 00:05:47.699 Message: lib/telemetry: Defining dependency "telemetry" 00:05:47.699 Checking for function "getentropy" : YES 00:05:47.699 Message: lib/eal: Defining dependency "eal" 00:05:47.699 Message: lib/ring: Defining dependency "ring" 00:05:47.699 Message: lib/rcu: Defining dependency "rcu" 00:05:47.699 Message: lib/mempool: Defining dependency "mempool" 00:05:47.699 Message: lib/mbuf: Defining dependency "mbuf" 00:05:47.699 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:05:47.699 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:47.699 Compiler for C supports arguments -mpclmul: YES 00:05:47.699 Compiler for C supports arguments -maes: YES 00:05:47.699 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:47.699 Compiler for C supports arguments -mavx512bw: YES 00:05:47.699 Compiler for C supports arguments -mavx512dq: YES 00:05:47.699 Compiler for C supports arguments -mavx512vl: YES 00:05:47.699 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:47.699 Compiler for C supports arguments -mavx2: YES 00:05:47.699 Compiler for C supports arguments -mavx: YES 00:05:47.699 Message: lib/net: Defining dependency "net" 00:05:47.699 Message: lib/meter: Defining dependency "meter" 00:05:47.699 Message: lib/ethdev: Defining dependency "ethdev" 00:05:47.699 Message: lib/pci: Defining dependency "pci" 00:05:47.699 Message: lib/cmdline: Defining dependency "cmdline" 00:05:47.699 Message: lib/metrics: Defining dependency "metrics" 00:05:47.699 Message: lib/hash: Defining dependency "hash" 00:05:47.699 Message: lib/timer: Defining dependency "timer" 00:05:47.699 Fetching value of define "__AVX2__" : 1 (cached) 00:05:47.699 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:05:47.699 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:05:47.699 Message: lib/acl: Defining dependency "acl" 00:05:47.699 Message: lib/bbdev: Defining dependency "bbdev" 00:05:47.699 Message: lib/bitratestats: Defining dependency "bitratestats" 00:05:47.699 Run-time dependency libelf found: YES 0.191 00:05:47.699 Message: lib/bpf: Defining dependency "bpf" 00:05:47.699 Message: lib/cfgfile: Defining dependency "cfgfile" 00:05:47.699 Message: lib/compressdev: Defining dependency "compressdev" 00:05:47.699 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:47.699 Message: lib/distributor: Defining dependency "distributor" 00:05:47.699 Message: lib/efd: Defining dependency "efd" 00:05:47.699 Message: lib/eventdev: Defining dependency "eventdev" 00:05:47.699 Message: lib/gpudev: Defining dependency "gpudev" 00:05:47.699 Message: lib/gro: Defining dependency "gro" 00:05:47.699 Message: lib/gso: Defining dependency "gso" 00:05:47.699 Message: lib/ip_frag: Defining dependency "ip_frag" 00:05:47.699 Message: lib/jobstats: Defining dependency "jobstats" 00:05:47.699 Message: lib/latencystats: Defining dependency "latencystats" 00:05:47.699 Message: lib/lpm: Defining dependency "lpm" 00:05:47.699 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512IFMA__" : (undefined) 00:05:47.699 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:05:47.699 Message: lib/member: Defining dependency "member" 00:05:47.699 Message: lib/pcapng: Defining dependency "pcapng" 00:05:47.699 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:47.699 Message: lib/power: Defining dependency "power" 00:05:47.699 Message: lib/rawdev: Defining dependency "rawdev" 00:05:47.699 Message: lib/regexdev: Defining dependency "regexdev" 00:05:47.699 Message: lib/dmadev: Defining dependency "dmadev" 00:05:47.699 Message: lib/rib: Defining 
dependency "rib" 00:05:47.699 Message: lib/reorder: Defining dependency "reorder" 00:05:47.699 Message: lib/sched: Defining dependency "sched" 00:05:47.699 Message: lib/security: Defining dependency "security" 00:05:47.699 Message: lib/stack: Defining dependency "stack" 00:05:47.699 Has header "linux/userfaultfd.h" : YES 00:05:47.699 Message: lib/vhost: Defining dependency "vhost" 00:05:47.699 Message: lib/ipsec: Defining dependency "ipsec" 00:05:47.699 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:47.699 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:47.699 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:05:47.699 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:47.699 Message: lib/fib: Defining dependency "fib" 00:05:47.699 Message: lib/port: Defining dependency "port" 00:05:47.699 Message: lib/pdump: Defining dependency "pdump" 00:05:47.699 Message: lib/table: Defining dependency "table" 00:05:47.699 Message: lib/pipeline: Defining dependency "pipeline" 00:05:47.699 Message: lib/graph: Defining dependency "graph" 00:05:47.699 Message: lib/node: Defining dependency "node" 00:05:47.699 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:47.699 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:47.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:47.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:47.699 Compiler for C supports arguments -Wno-sign-compare: YES 00:05:47.699 Compiler for C supports arguments -Wno-unused-value: YES 00:05:47.699 Compiler for C supports arguments -Wno-format: YES 00:05:47.699 Compiler for C supports arguments -Wno-format-security: YES 00:05:47.699 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:05:49.600 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:49.600 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:05:49.600 Compiler for C supports arguments -Wno-unused-parameter: YES 00:05:49.600 Fetching value of define "__AVX2__" : 1 (cached) 00:05:49.600 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:49.600 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:49.600 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:49.600 Compiler for C supports arguments -march=skylake-avx512: YES 00:05:49.600 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:05:49.600 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:49.600 Configuring doxy-api.conf using configuration 00:05:49.600 Program sphinx-build found: NO 00:05:49.600 Configuring rte_build_config.h using configuration 00:05:49.600 Message: 00:05:49.600 ================= 00:05:49.600 Applications Enabled 00:05:49.600 ================= 00:05:49.600 00:05:49.600 apps: 00:05:49.600 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:05:49.600 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:05:49.600 test-security-perf, 00:05:49.600 00:05:49.600 Message: 00:05:49.600 ================= 00:05:49.600 Libraries Enabled 00:05:49.600 ================= 00:05:49.600 00:05:49.600 libs: 00:05:49.600 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:05:49.600 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:05:49.600 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:05:49.601 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:05:49.601 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:05:49.601 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:05:49.601 table, pipeline, graph, node, 00:05:49.601 00:05:49.601 Message: 00:05:49.601 =============== 00:05:49.601 Drivers Enabled 00:05:49.601 =============== 00:05:49.601 00:05:49.601 common: 00:05:49.601 00:05:49.601 bus: 00:05:49.601 pci, vdev, 00:05:49.601 mempool: 00:05:49.601 ring, 00:05:49.601 dma: 00:05:49.601 00:05:49.601 net: 00:05:49.601 i40e, 00:05:49.601 raw: 00:05:49.601 00:05:49.601 crypto: 00:05:49.601 00:05:49.601 compress: 00:05:49.601 00:05:49.601 regex: 00:05:49.601 00:05:49.601 vdpa: 00:05:49.601 00:05:49.601 event: 00:05:49.601 00:05:49.601 baseband: 00:05:49.601 00:05:49.601 gpu: 00:05:49.601 00:05:49.601 00:05:49.601 Message: 00:05:49.601 ================= 00:05:49.601 Content Skipped 00:05:49.601 ================= 00:05:49.601 00:05:49.601 apps: 00:05:49.601 00:05:49.601 libs: 00:05:49.601 kni: explicitly disabled via build config (deprecated lib) 00:05:49.601 flow_classify: explicitly disabled via build config (deprecated lib) 00:05:49.601 00:05:49.601 drivers: 00:05:49.601 common/cpt: not in enabled drivers build config 00:05:49.601 common/dpaax: not in enabled drivers build config 00:05:49.601 common/iavf: not in enabled drivers build config 00:05:49.601 common/idpf: not in enabled drivers build config 00:05:49.601 common/mvep: not in enabled drivers build config 00:05:49.601 common/octeontx: not in enabled drivers build config 00:05:49.601 bus/auxiliary: not in enabled drivers build config 00:05:49.601 bus/dpaa: not in enabled drivers build config 00:05:49.601 bus/fslmc: not in enabled drivers build config 00:05:49.601 bus/ifpga: not in enabled drivers build config 00:05:49.601 bus/vmbus: not in enabled drivers build config 00:05:49.601 common/cnxk: not in enabled drivers build config 00:05:49.601 common/mlx5: not in enabled drivers build config 00:05:49.601 common/qat: not in enabled drivers build config 00:05:49.601 common/sfc_efx: not in enabled drivers build config 00:05:49.601 mempool/bucket: not in enabled drivers build config 00:05:49.601 mempool/cnxk: not in enabled drivers build config 00:05:49.601 mempool/dpaa: not in enabled drivers build config 00:05:49.601 mempool/dpaa2: not in enabled drivers build config 00:05:49.601 mempool/octeontx: not in enabled drivers build config 00:05:49.601 mempool/stack: not in enabled drivers build config 00:05:49.601 dma/cnxk: not in enabled drivers build config 00:05:49.601 dma/dpaa: not in enabled drivers build config 00:05:49.601 dma/dpaa2: not in enabled drivers build config 00:05:49.601 dma/hisilicon: not in enabled drivers build config 00:05:49.601 dma/idxd: not in enabled drivers build config 00:05:49.601 dma/ioat: not in enabled drivers build config 00:05:49.601 dma/skeleton: not in enabled drivers build config 00:05:49.601 net/af_packet: not in enabled drivers build config 00:05:49.601 net/af_xdp: not in enabled drivers build config 00:05:49.601 net/ark: not in enabled drivers build config 00:05:49.601 net/atlantic: not in enabled drivers build config 00:05:49.601 net/avp: not in enabled drivers build config 00:05:49.601 net/axgbe: not in enabled drivers build config 00:05:49.601 net/bnx2x: not in enabled drivers build config 00:05:49.601 net/bnxt: not in enabled drivers build config 00:05:49.601 net/bonding: not in enabled drivers build config 00:05:49.601 net/cnxk: not in enabled drivers build config 00:05:49.601 net/cxgbe: not in 
enabled drivers build config 00:05:49.601 net/dpaa: not in enabled drivers build config 00:05:49.601 net/dpaa2: not in enabled drivers build config 00:05:49.601 net/e1000: not in enabled drivers build config 00:05:49.601 net/ena: not in enabled drivers build config 00:05:49.601 net/enetc: not in enabled drivers build config 00:05:49.601 net/enetfec: not in enabled drivers build config 00:05:49.601 net/enic: not in enabled drivers build config 00:05:49.601 net/failsafe: not in enabled drivers build config 00:05:49.601 net/fm10k: not in enabled drivers build config 00:05:49.601 net/gve: not in enabled drivers build config 00:05:49.601 net/hinic: not in enabled drivers build config 00:05:49.601 net/hns3: not in enabled drivers build config 00:05:49.601 net/iavf: not in enabled drivers build config 00:05:49.601 net/ice: not in enabled drivers build config 00:05:49.601 net/idpf: not in enabled drivers build config 00:05:49.601 net/igc: not in enabled drivers build config 00:05:49.601 net/ionic: not in enabled drivers build config 00:05:49.601 net/ipn3ke: not in enabled drivers build config 00:05:49.601 net/ixgbe: not in enabled drivers build config 00:05:49.601 net/kni: not in enabled drivers build config 00:05:49.601 net/liquidio: not in enabled drivers build config 00:05:49.601 net/mana: not in enabled drivers build config 00:05:49.601 net/memif: not in enabled drivers build config 00:05:49.601 net/mlx4: not in enabled drivers build config 00:05:49.601 net/mlx5: not in enabled drivers build config 00:05:49.601 net/mvneta: not in enabled drivers build config 00:05:49.601 net/mvpp2: not in enabled drivers build config 00:05:49.601 net/netvsc: not in enabled drivers build config 00:05:49.601 net/nfb: not in enabled drivers build config 00:05:49.601 net/nfp: not in enabled drivers build config 00:05:49.601 net/ngbe: not in enabled drivers build config 00:05:49.601 net/null: not in enabled drivers build config 00:05:49.601 net/octeontx: not in enabled drivers build config 00:05:49.601 net/octeon_ep: not in enabled drivers build config 00:05:49.601 net/pcap: not in enabled drivers build config 00:05:49.601 net/pfe: not in enabled drivers build config 00:05:49.601 net/qede: not in enabled drivers build config 00:05:49.601 net/ring: not in enabled drivers build config 00:05:49.601 net/sfc: not in enabled drivers build config 00:05:49.601 net/softnic: not in enabled drivers build config 00:05:49.601 net/tap: not in enabled drivers build config 00:05:49.601 net/thunderx: not in enabled drivers build config 00:05:49.601 net/txgbe: not in enabled drivers build config 00:05:49.601 net/vdev_netvsc: not in enabled drivers build config 00:05:49.601 net/vhost: not in enabled drivers build config 00:05:49.601 net/virtio: not in enabled drivers build config 00:05:49.601 net/vmxnet3: not in enabled drivers build config 00:05:49.601 raw/cnxk_bphy: not in enabled drivers build config 00:05:49.601 raw/cnxk_gpio: not in enabled drivers build config 00:05:49.601 raw/dpaa2_cmdif: not in enabled drivers build config 00:05:49.601 raw/ifpga: not in enabled drivers build config 00:05:49.601 raw/ntb: not in enabled drivers build config 00:05:49.601 raw/skeleton: not in enabled drivers build config 00:05:49.601 crypto/armv8: not in enabled drivers build config 00:05:49.601 crypto/bcmfs: not in enabled drivers build config 00:05:49.601 crypto/caam_jr: not in enabled drivers build config 00:05:49.601 crypto/ccp: not in enabled drivers build config 00:05:49.601 crypto/cnxk: not in enabled drivers build config 00:05:49.601 
crypto/dpaa_sec: not in enabled drivers build config 00:05:49.601 crypto/dpaa2_sec: not in enabled drivers build config 00:05:49.601 crypto/ipsec_mb: not in enabled drivers build config 00:05:49.601 crypto/mlx5: not in enabled drivers build config 00:05:49.601 crypto/mvsam: not in enabled drivers build config 00:05:49.601 crypto/nitrox: not in enabled drivers build config 00:05:49.601 crypto/null: not in enabled drivers build config 00:05:49.601 crypto/octeontx: not in enabled drivers build config 00:05:49.601 crypto/openssl: not in enabled drivers build config 00:05:49.601 crypto/scheduler: not in enabled drivers build config 00:05:49.601 crypto/uadk: not in enabled drivers build config 00:05:49.601 crypto/virtio: not in enabled drivers build config 00:05:49.601 compress/isal: not in enabled drivers build config 00:05:49.601 compress/mlx5: not in enabled drivers build config 00:05:49.601 compress/octeontx: not in enabled drivers build config 00:05:49.601 compress/zlib: not in enabled drivers build config 00:05:49.601 regex/mlx5: not in enabled drivers build config 00:05:49.601 regex/cn9k: not in enabled drivers build config 00:05:49.601 vdpa/ifc: not in enabled drivers build config 00:05:49.601 vdpa/mlx5: not in enabled drivers build config 00:05:49.601 vdpa/sfc: not in enabled drivers build config 00:05:49.601 event/cnxk: not in enabled drivers build config 00:05:49.601 event/dlb2: not in enabled drivers build config 00:05:49.601 event/dpaa: not in enabled drivers build config 00:05:49.601 event/dpaa2: not in enabled drivers build config 00:05:49.601 event/dsw: not in enabled drivers build config 00:05:49.601 event/opdl: not in enabled drivers build config 00:05:49.601 event/skeleton: not in enabled drivers build config 00:05:49.601 event/sw: not in enabled drivers build config 00:05:49.601 event/octeontx: not in enabled drivers build config 00:05:49.601 baseband/acc: not in enabled drivers build config 00:05:49.601 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:05:49.601 baseband/fpga_lte_fec: not in enabled drivers build config 00:05:49.601 baseband/la12xx: not in enabled drivers build config 00:05:49.601 baseband/null: not in enabled drivers build config 00:05:49.601 baseband/turbo_sw: not in enabled drivers build config 00:05:49.601 gpu/cuda: not in enabled drivers build config 00:05:49.601 00:05:49.601 00:05:49.601 Build targets in project: 314 00:05:49.601 00:05:49.601 DPDK 22.11.4 00:05:49.601 00:05:49.601 User defined options 00:05:49.601 libdir : lib 00:05:49.602 prefix : /home/vagrant/spdk_repo/dpdk/build 00:05:49.602 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:05:49.602 c_link_args : 00:05:49.602 enable_docs : false 00:05:49.602 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:49.602 enable_kmods : false 00:05:49.602 machine : native 00:05:49.602 tests : false 00:05:49.602 00:05:49.602 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:49.602 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
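[Editor's note] For reference, the "User defined options" summary above corresponds roughly to the meson setup invocation sketched below. This is an assumed reconstruction from the logged option values (prefix, libdir, c_args, enable_drivers, enable_kmods, machine, tests); the actual command is issued by SPDK's autobuild scripts and is not shown verbatim in this log.

    # Illustrative only: reconstructed from the option summary above, not copied from the log.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Compilation then proceeds with ninja in the same build directory, as logged below:
    ninja -C build-tmp -j10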
00:05:49.602 11:49:54 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:05:49.602 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:49.602 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:05:49.602 [2/743] Generating lib/rte_telemetry_def with a custom command 00:05:49.602 [3/743] Generating lib/rte_kvargs_def with a custom command 00:05:49.602 [4/743] Generating lib/rte_kvargs_mingw with a custom command 00:05:49.602 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:49.602 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:49.602 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:49.602 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:49.602 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:49.602 [10/743] Linking static target lib/librte_kvargs.a 00:05:49.602 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:49.602 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:49.602 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:49.602 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:49.972 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:49.972 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:49.972 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:49.972 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:49.972 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:49.972 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.972 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:05:49.972 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:49.972 [23/743] Linking target lib/librte_kvargs.so.23.0 00:05:49.972 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:49.972 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:49.972 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:49.972 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:49.972 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:50.272 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:50.272 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:50.272 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:50.272 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:50.272 [33/743] Linking static target lib/librte_telemetry.a 00:05:50.272 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:50.272 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:50.272 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:50.272 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:50.272 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:05:50.272 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:50.272 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:50.272 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:50.529 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:50.529 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.529 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:50.529 [45/743] Linking target lib/librte_telemetry.so.23.0 00:05:50.529 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:50.529 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:50.529 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:50.529 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:05:50.787 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:50.787 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:50.787 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:50.787 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:50.787 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:50.787 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:50.787 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:50.787 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:50.787 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:50.787 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:50.787 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:50.787 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:50.787 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:50.787 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:50.787 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:50.787 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:05:51.045 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:51.045 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:51.045 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:51.045 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:51.045 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:51.045 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:51.045 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:51.045 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:51.045 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:51.045 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:51.045 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:51.045 [77/743] Generating lib/rte_eal_def with a custom command 00:05:51.045 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:05:51.045 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:51.045 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:51.045 [81/743] Generating lib/rte_ring_def with a custom command 00:05:51.318 [82/743] Generating lib/rte_ring_mingw with a custom command 00:05:51.318 [83/743] Generating lib/rte_rcu_def with a custom command 00:05:51.318 [84/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:51.318 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:51.318 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:05:51.318 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:51.318 [88/743] Linking static target lib/librte_ring.a 00:05:51.318 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:51.318 [90/743] Generating lib/rte_mempool_def with a custom command 00:05:51.318 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:05:51.318 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:51.318 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:51.576 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.576 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:51.834 [96/743] Linking static target lib/librte_eal.a 00:05:51.834 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:51.834 [98/743] Generating lib/rte_mbuf_def with a custom command 00:05:51.834 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:51.834 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:51.834 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:05:52.092 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:52.092 [103/743] Linking static target lib/librte_rcu.a 00:05:52.092 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:52.092 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:52.350 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:52.350 [107/743] Linking static target lib/librte_mempool.a 00:05:52.350 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:52.350 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.350 [110/743] Generating lib/rte_net_def with a custom command 00:05:52.350 [111/743] Generating lib/rte_net_mingw with a custom command 00:05:52.350 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:52.609 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:52.609 [114/743] Generating lib/rte_meter_def with a custom command 00:05:52.609 [115/743] Generating lib/rte_meter_mingw with a custom command 00:05:52.609 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:52.609 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:52.609 [118/743] Linking static target lib/librte_meter.a 00:05:52.609 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:52.609 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:52.868 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:52.868 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:05:52.868 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:52.868 [124/743] Linking static target lib/librte_net.a 00:05:53.126 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:53.126 [126/743] Linking static target lib/librte_mbuf.a 00:05:53.126 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.126 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.385 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:53.385 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:53.385 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:53.385 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:53.644 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:53.644 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.903 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:53.903 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:54.162 [137/743] Generating lib/rte_ethdev_def with a custom command 00:05:54.162 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:05:54.162 [139/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:54.162 [140/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:54.162 [141/743] Linking static target lib/librte_pci.a 00:05:54.162 [142/743] Generating lib/rte_pci_def with a custom command 00:05:54.162 [143/743] Generating lib/rte_pci_mingw with a custom command 00:05:54.162 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:54.162 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:54.421 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:54.421 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:54.421 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:54.421 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.421 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:54.421 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:54.421 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:54.679 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:54.679 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:54.679 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:54.679 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:54.679 [157/743] Generating lib/rte_cmdline_def with a custom command 00:05:54.679 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:54.679 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:05:54.679 [160/743] Generating lib/rte_metrics_def with a custom command 00:05:54.679 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:05:54.679 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:54.679 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:54.679 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:54.937 [165/743] Generating lib/rte_hash_def with a custom command 00:05:54.937 [166/743] Generating lib/rte_hash_mingw with a custom command 00:05:54.937 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:54.937 [168/743] Generating lib/rte_timer_def with a custom command 00:05:54.937 [169/743] Generating lib/rte_timer_mingw with a custom command 00:05:54.937 [170/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:54.937 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:54.937 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:54.937 [173/743] Linking static target lib/librte_cmdline.a 00:05:55.505 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:05:55.505 [175/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:55.505 [176/743] Linking static target lib/librte_metrics.a 00:05:55.505 [177/743] Linking static target lib/librte_timer.a 00:05:55.763 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.763 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.763 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:55.763 [181/743] Linking static target lib/librte_ethdev.a 00:05:55.763 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:56.021 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:05:56.021 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.588 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:05:56.588 [186/743] Generating lib/rte_acl_def with a custom command 00:05:56.588 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:05:56.588 [188/743] Generating lib/rte_acl_mingw with a custom command 00:05:56.588 [189/743] Generating lib/rte_bbdev_def with a custom command 00:05:56.588 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:05:56.846 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:05:56.846 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:05:56.846 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:05:56.846 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:05:57.412 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:05:57.412 [196/743] Linking static target lib/librte_bitratestats.a 00:05:57.412 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:05:57.412 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.412 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:05:57.412 [200/743] Linking static target lib/librte_bbdev.a 00:05:57.670 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:05:57.670 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:57.670 [203/743] Linking static target lib/librte_hash.a 00:05:57.929 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:05:57.929 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:05:57.929 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:05:58.187 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.187 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:05:58.187 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:05:58.446 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.446 [211/743] Generating lib/rte_bpf_def with a custom command 00:05:58.446 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:05:58.732 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:05:58.732 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:05:58.732 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:05:58.732 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:05:58.732 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:05:58.732 [218/743] Linking static target lib/librte_acl.a 00:05:58.732 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:05:58.732 [220/743] Linking static target lib/librte_cfgfile.a 00:05:58.732 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:05:58.991 [222/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.991 [223/743] Generating lib/rte_compressdev_def with a custom command 00:05:59.250 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:05:59.250 [225/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:05:59.250 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.250 [227/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:59.250 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:05:59.250 [229/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.250 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:05:59.250 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:05:59.509 [232/743] Linking target lib/librte_eal.so.23.0 00:05:59.509 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:59.509 [234/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:05:59.509 [235/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:05:59.510 [236/743] Linking static target lib/librte_bpf.a 00:05:59.510 [237/743] Linking target lib/librte_ring.so.23.0 00:05:59.510 [238/743] Linking target lib/librte_meter.so.23.0 00:05:59.766 [239/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:59.767 [240/743] Linking target lib/librte_pci.so.23.0 00:05:59.767 [241/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:05:59.767 [242/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:05:59.767 [243/743] Linking target lib/librte_rcu.so.23.0 00:05:59.767 [244/743] Linking target lib/librte_mempool.so.23.0 00:05:59.767 [245/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:59.767 [246/743] Linking target lib/librte_timer.so.23.0 00:05:59.767 [247/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:05:59.767 [248/743] Linking target lib/librte_acl.so.23.0 00:06:00.024 [249/743] Generating symbol file 
lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:06:00.024 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:06:00.024 [251/743] Linking target lib/librte_mbuf.so.23.0 00:06:00.024 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:06:00.024 [253/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:00.024 [254/743] Linking target lib/librte_cfgfile.so.23.0 00:06:00.024 [255/743] Linking static target lib/librte_compressdev.a 00:06:00.024 [256/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:06:00.024 [257/743] Generating lib/rte_distributor_def with a custom command 00:06:00.024 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:06:00.024 [259/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:06:00.024 [260/743] Generating lib/rte_efd_def with a custom command 00:06:00.024 [261/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:06:00.024 [262/743] Generating lib/rte_efd_mingw with a custom command 00:06:00.024 [263/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:06:00.025 [264/743] Linking target lib/librte_net.so.23.0 00:06:00.283 [265/743] Linking target lib/librte_bbdev.so.23.0 00:06:00.283 [266/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.283 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:06:00.283 [268/743] Linking target lib/librte_cmdline.so.23.0 00:06:00.283 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:06:00.283 [270/743] Linking static target lib/librte_distributor.a 00:06:00.541 [271/743] Linking target lib/librte_hash.so.23.0 00:06:00.541 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:06:00.541 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.800 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.800 [275/743] Linking target lib/librte_ethdev.so.23.0 00:06:00.800 [276/743] Linking target lib/librte_distributor.so.23.0 00:06:00.800 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:06:00.800 [278/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:06:00.800 [279/743] Linking target lib/librte_metrics.so.23.0 00:06:00.800 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:06:01.060 [281/743] Linking target lib/librte_bpf.so.23.0 00:06:01.060 [282/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.060 [283/743] Linking target lib/librte_compressdev.so.23.0 00:06:01.060 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:06:01.060 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:06:01.060 [286/743] Generating lib/rte_eventdev_def with a custom command 00:06:01.060 [287/743] Linking target lib/librte_bitratestats.so.23.0 00:06:01.060 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:06:01.060 [289/743] Generating lib/rte_gpudev_def with a custom command 00:06:01.060 [290/743] Generating lib/rte_gpudev_mingw with 
a custom command 00:06:01.319 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:06:01.577 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:06:01.577 [293/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:06:01.577 [294/743] Linking static target lib/librte_efd.a 00:06:01.577 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:01.577 [296/743] Linking static target lib/librte_cryptodev.a 00:06:01.886 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.886 [298/743] Linking target lib/librte_efd.so.23.0 00:06:02.144 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:06:02.144 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:06:02.144 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:06:02.144 [302/743] Linking static target lib/librte_gpudev.a 00:06:02.144 [303/743] Generating lib/rte_gro_def with a custom command 00:06:02.144 [304/743] Generating lib/rte_gro_mingw with a custom command 00:06:02.144 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:06:02.144 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:06:02.402 [307/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:06:02.402 [308/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:06:02.660 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:06:02.919 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:06:02.919 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:06:02.919 [312/743] Generating lib/rte_gso_def with a custom command 00:06:02.919 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:06:02.919 [314/743] Generating lib/rte_gso_mingw with a custom command 00:06:02.919 [315/743] Linking static target lib/librte_gro.a 00:06:02.919 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.919 [317/743] Linking target lib/librte_gpudev.so.23.0 00:06:03.177 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:06:03.177 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.177 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:06:03.177 [321/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:06:03.177 [322/743] Linking target lib/librte_gro.so.23.0 00:06:03.177 [323/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:06:03.177 [324/743] Linking static target lib/librte_eventdev.a 00:06:03.177 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:06:03.177 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:06:03.435 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:06:03.435 [328/743] Linking static target lib/librte_jobstats.a 00:06:03.435 [329/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:06:03.435 [330/743] Linking static target lib/librte_gso.a 00:06:03.435 [331/743] Generating lib/rte_jobstats_def with a custom command 00:06:03.435 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:06:03.693 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.693 [334/743] 
Linking target lib/librte_gso.so.23.0 00:06:03.694 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:06:03.694 [336/743] Generating lib/rte_latencystats_def with a custom command 00:06:03.694 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:06:03.694 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:06:03.951 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:06:03.951 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.951 [341/743] Generating lib/rte_lpm_def with a custom command 00:06:03.952 [342/743] Linking target lib/librte_jobstats.so.23.0 00:06:03.952 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:06:03.952 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:06:03.952 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:06:04.211 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:06:04.211 [347/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.211 [348/743] Linking static target lib/librte_ip_frag.a 00:06:04.211 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:06:04.211 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:06:04.469 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.469 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:06:04.469 [353/743] Linking static target lib/librte_latencystats.a 00:06:04.727 [354/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:06:04.727 [355/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:06:04.727 [356/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:06:04.727 [357/743] Linking target lib/librte_ip_frag.so.23.0 00:06:04.727 [358/743] Generating lib/rte_member_def with a custom command 00:06:04.727 [359/743] Generating lib/rte_member_mingw with a custom command 00:06:04.727 [360/743] Generating lib/rte_pcapng_def with a custom command 00:06:04.727 [361/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:06:04.727 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:06:04.727 [363/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:06:04.727 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.727 [365/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:04.985 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:04.985 [367/743] Linking target lib/librte_latencystats.so.23.0 00:06:04.985 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:04.985 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:05.244 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:06:05.244 [371/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:06:05.244 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:06:05.244 [373/743] Linking static target lib/librte_lpm.a 00:06:05.244 [374/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:06:05.244 [375/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:06:05.244 [376/743] Generating lib/rte_power_def with a custom command 00:06:05.244 [377/743] Linking target lib/librte_eventdev.so.23.0 00:06:05.244 [378/743] Generating lib/rte_power_mingw with a custom command 00:06:05.502 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:06:05.502 [380/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:05.502 [381/743] Generating lib/rte_rawdev_def with a custom command 00:06:05.502 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:06:05.502 [383/743] Generating lib/rte_regexdev_def with a custom command 00:06:05.502 [384/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.502 [385/743] Generating lib/rte_regexdev_mingw with a custom command 00:06:05.502 [386/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:06:05.761 [387/743] Linking static target lib/librte_pcapng.a 00:06:05.761 [388/743] Linking target lib/librte_lpm.so.23.0 00:06:05.761 [389/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:05.761 [390/743] Generating lib/rte_dmadev_def with a custom command 00:06:05.761 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:06:05.761 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:06:05.761 [393/743] Linking static target lib/librte_rawdev.a 00:06:05.762 [394/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:06:05.762 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:05.762 [396/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:06:06.020 [397/743] Generating lib/rte_rib_def with a custom command 00:06:06.020 [398/743] Generating lib/rte_rib_mingw with a custom command 00:06:06.020 [399/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.021 [400/743] Generating lib/rte_reorder_def with a custom command 00:06:06.021 [401/743] Generating lib/rte_reorder_mingw with a custom command 00:06:06.021 [402/743] Linking target lib/librte_pcapng.so.23.0 00:06:06.021 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:06.021 [404/743] Linking static target lib/librte_dmadev.a 00:06:06.021 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:06.021 [406/743] Linking static target lib/librte_power.a 00:06:06.021 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:06:06.279 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.279 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:06:06.279 [410/743] Linking target lib/librte_rawdev.so.23.0 00:06:06.279 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:06:06.279 [412/743] Linking static target lib/librte_regexdev.a 00:06:06.279 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:06:06.279 [414/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:06:06.279 [415/743] Linking static target lib/librte_member.a 00:06:06.279 [416/743] Generating lib/rte_sched_def with a custom command 00:06:06.279 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:06:06.538 [418/743] 
Generating lib/rte_sched_mingw with a custom command 00:06:06.538 [419/743] Generating lib/rte_security_def with a custom command 00:06:06.538 [420/743] Generating lib/rte_security_mingw with a custom command 00:06:06.538 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:06:06.538 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:06:06.538 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.797 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:06:06.797 [425/743] Linking target lib/librte_dmadev.so.23.0 00:06:06.797 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:06:06.797 [427/743] Generating lib/rte_stack_def with a custom command 00:06:06.797 [428/743] Linking static target lib/librte_stack.a 00:06:06.797 [429/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.797 [430/743] Generating lib/rte_stack_mingw with a custom command 00:06:06.797 [431/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:06.797 [432/743] Linking static target lib/librte_reorder.a 00:06:06.797 [433/743] Linking target lib/librte_member.so.23.0 00:06:06.797 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:06:06.797 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:07.055 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.055 [437/743] Linking target lib/librte_stack.so.23.0 00:06:07.055 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:06:07.055 [439/743] Linking static target lib/librte_rib.a 00:06:07.055 [440/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.055 [441/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.055 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.055 [443/743] Linking target lib/librte_regexdev.so.23.0 00:06:07.055 [444/743] Linking target lib/librte_power.so.23.0 00:06:07.055 [445/743] Linking target lib/librte_reorder.so.23.0 00:06:07.313 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:07.313 [447/743] Linking static target lib/librte_security.a 00:06:07.571 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.571 [449/743] Linking target lib/librte_rib.so.23.0 00:06:07.571 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:07.829 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:07.829 [452/743] Generating lib/rte_vhost_def with a custom command 00:06:07.829 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:06:07.829 [454/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:06:07.829 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.829 [456/743] Linking target lib/librte_security.so.23.0 00:06:07.829 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:07.829 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:06:08.126 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:06:08.126 [460/743] Linking static target lib/librte_sched.a 00:06:08.384 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.384 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:06:08.384 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:08.384 [464/743] Linking target lib/librte_sched.so.23.0 00:06:08.642 [465/743] Generating lib/rte_ipsec_def with a custom command 00:06:08.642 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:06:08.642 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:06:08.642 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:06:08.642 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:08.899 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:06:08.899 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:06:09.156 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:06:09.156 [473/743] Generating lib/rte_fib_def with a custom command 00:06:09.156 [474/743] Generating lib/rte_fib_mingw with a custom command 00:06:09.156 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:06:09.156 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:06:09.156 [477/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:06:09.156 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:06:09.414 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:06:09.672 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:06:09.672 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:06:09.672 [482/743] Linking static target lib/librte_ipsec.a 00:06:09.929 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.929 [484/743] Linking target lib/librte_ipsec.so.23.0 00:06:10.186 [485/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:06:10.186 [486/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:06:10.186 [487/743] Linking static target lib/librte_fib.a 00:06:10.186 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:06:10.186 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:06:10.186 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:06:10.443 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:06:10.443 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.443 [493/743] Linking target lib/librte_fib.so.23.0 00:06:10.702 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:06:11.270 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:06:11.270 [496/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:06:11.270 [497/743] Generating lib/rte_port_def with a custom command 00:06:11.270 [498/743] Generating lib/rte_port_mingw with a custom command 00:06:11.270 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:06:11.270 [500/743] Generating lib/rte_pdump_def with a custom command 00:06:11.270 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:06:11.270 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:06:11.270 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:06:11.528 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:06:11.528 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:06:11.786 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:06:11.786 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:06:11.786 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:06:11.786 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:06:12.044 [510/743] Linking static target lib/librte_port.a 00:06:12.302 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:06:12.302 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:06:12.302 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:06:12.302 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:06:12.561 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:06:12.561 [516/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:06:12.561 [517/743] Linking static target lib/librte_pdump.a 00:06:12.561 [518/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.561 [519/743] Linking target lib/librte_port.so.23.0 00:06:12.820 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:06:12.820 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.820 [522/743] Linking target lib/librte_pdump.so.23.0 00:06:13.078 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:06:13.078 [524/743] Generating lib/rte_table_def with a custom command 00:06:13.078 [525/743] Generating lib/rte_table_mingw with a custom command 00:06:13.078 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:06:13.337 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:06:13.337 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:06:13.337 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:06:13.595 [530/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:13.595 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:06:13.595 [532/743] Generating lib/rte_pipeline_def with a custom command 00:06:13.595 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:06:13.595 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:06:13.595 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:06:13.854 [536/743] Linking static target lib/librte_table.a 00:06:13.854 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:06:14.112 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:06:14.371 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:06:14.371 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.371 [541/743] Linking target lib/librte_table.so.23.0 00:06:14.371 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:06:14.629 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:06:14.629 [544/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:06:14.629 [545/743] Generating 
lib/rte_graph_def with a custom command 00:06:14.629 [546/743] Generating lib/rte_graph_mingw with a custom command 00:06:14.888 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:06:14.888 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:06:15.147 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:06:15.147 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:06:15.147 [551/743] Linking static target lib/librte_graph.a 00:06:15.406 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:06:15.406 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:06:15.406 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:06:15.665 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:06:15.923 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:06:15.923 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:06:15.923 [558/743] Generating lib/rte_node_def with a custom command 00:06:15.923 [559/743] Generating lib/rte_node_mingw with a custom command 00:06:15.923 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:06:15.923 [561/743] Linking target lib/librte_graph.so.23.0 00:06:15.923 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:16.182 [563/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:06:16.182 [564/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:06:16.182 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:06:16.182 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:16.182 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:06:16.182 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:06:16.182 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:16.443 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:16.443 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:06:16.443 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:16.443 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:06:16.443 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:06:16.443 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:06:16.443 [576/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:06:16.443 [577/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:06:16.443 [578/743] Linking static target lib/librte_node.a 00:06:16.443 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:16.443 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:16.443 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:16.708 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:16.708 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:16.708 [584/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.708 [585/743] Linking static target drivers/librte_bus_vdev.a 00:06:16.708 [586/743] Linking target lib/librte_node.so.23.0 00:06:16.967 [587/743] 
Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:16.967 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:16.967 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:16.967 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.967 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:06:17.225 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:17.225 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:17.225 [594/743] Linking static target drivers/librte_bus_pci.a 00:06:17.225 [595/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:06:17.225 [596/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:17.485 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:06:17.485 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.485 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:06:17.485 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:06:17.485 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:06:17.744 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:06:17.744 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:17.744 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:18.002 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:06:18.002 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:18.002 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:18.002 [608/743] Linking static target drivers/librte_mempool_ring.a 00:06:18.002 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:18.002 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:06:18.569 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:06:18.828 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:06:18.828 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:06:18.828 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:06:19.394 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:06:19.394 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:06:19.394 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:06:19.652 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:06:19.910 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:06:20.167 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:06:20.425 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:06:20.425 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:06:20.425 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:06:20.425 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:06:20.425 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:06:21.357 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:06:21.614 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:06:21.614 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:06:21.872 [629/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:06:21.872 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:06:21.872 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:06:21.872 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:06:21.872 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:06:21.872 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:06:22.439 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:06:22.439 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:06:22.439 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:06:22.439 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:06:22.697 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:06:22.956 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:06:22.956 [641/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:22.956 [642/743] Linking static target drivers/librte_net_i40e.a 00:06:22.956 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:06:22.956 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:22.956 [645/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:23.214 [646/743] Linking static target lib/librte_vhost.a 00:06:23.214 [647/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:06:23.215 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:06:23.472 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:06:23.472 [650/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.730 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:06:23.730 [652/743] Linking target drivers/librte_net_i40e.so.23.0 00:06:23.730 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:06:23.730 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:06:23.988 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:06:24.247 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:06:24.247 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:06:24.247 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.505 [659/743] Linking target lib/librte_vhost.so.23.0 00:06:24.763 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:06:24.763 
[661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:06:24.763 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:06:24.763 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:06:24.763 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:06:24.763 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:06:25.021 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:06:25.021 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:06:25.279 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:06:25.279 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:06:25.537 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:06:25.795 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:06:25.795 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:06:25.795 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:06:26.361 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:06:26.361 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:06:26.642 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:06:26.642 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:06:26.899 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:06:26.899 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:06:27.157 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:06:27.157 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:06:27.157 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:06:27.415 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:06:27.415 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:06:27.415 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:06:27.673 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:06:27.673 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:06:27.673 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:06:27.931 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:06:27.931 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:06:28.190 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:06:28.190 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:06:28.190 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:06:28.190 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:06:28.756 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:06:28.756 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:06:28.756 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:06:29.014 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:06:29.272 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:06:29.838 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:06:29.838 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:06:29.838 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:06:29.838 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:06:30.096 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:06:30.096 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:06:30.096 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:06:30.662 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:06:30.921 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:06:30.921 [709/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:06:30.921 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:06:30.921 [711/743] Linking static target lib/librte_pipeline.a 00:06:30.921 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:06:31.487 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:06:31.487 [714/743] Linking target app/dpdk-dumpcap 00:06:31.487 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:06:31.487 [716/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:06:31.487 [717/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:06:31.746 [718/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:06:31.746 [719/743] Linking target app/dpdk-pdump 00:06:32.005 [720/743] Linking target app/dpdk-test-acl 00:06:32.005 [721/743] Linking target app/dpdk-proc-info 00:06:32.005 [722/743] Linking target app/dpdk-test-bbdev 00:06:32.005 [723/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:06:32.005 [724/743] Linking target app/dpdk-test-cmdline 00:06:32.005 [725/743] Linking target app/dpdk-test-compress-perf 00:06:32.263 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:06:32.263 [727/743] Linking target app/dpdk-test-crypto-perf 00:06:32.263 [728/743] Linking target app/dpdk-test-eventdev 00:06:32.263 [729/743] Linking target app/dpdk-test-fib 00:06:32.521 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:06:32.522 [731/743] Linking target app/dpdk-test-flow-perf 00:06:32.522 [732/743] Linking target app/dpdk-test-pipeline 00:06:32.522 [733/743] Linking target app/dpdk-test-gpudev 00:06:33.088 [734/743] Linking target app/dpdk-testpmd 00:06:33.088 [735/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:06:33.088 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:06:33.348 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:06:33.348 [738/743] Linking target app/dpdk-test-sad 00:06:33.607 [739/743] Linking target app/dpdk-test-regex 00:06:33.866 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:06:34.123 [741/743] Linking target app/dpdk-test-security-perf 00:06:34.382 [742/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.382 [743/743] Linking target 
lib/librte_pipeline.so.23.0 00:06:34.382 11:50:39 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:06:34.382 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:06:34.641 [0/1] Installing files. 00:06:34.913 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:06:34.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:34.917 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.917 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.918 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:34.919 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:34.919 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.919 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.919 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:34.920 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.214 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:35.215 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:35.215 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:35.215 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.215 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:35.215 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.215 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.477 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.478 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.479 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:35.480 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:35.480 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:06:35.480 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:06:35.480 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:06:35.480 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:06:35.480 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:06:35.480 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:06:35.480 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:06:35.480 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:06:35.480 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:06:35.480 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:06:35.480 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:06:35.480 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:06:35.480 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:06:35.480 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:06:35.480 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:06:35.480 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:06:35.480 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:06:35.480 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:06:35.480 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:06:35.480 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:06:35.480 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:06:35.480 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:06:35.480 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:06:35.480 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:06:35.480 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:06:35.480 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:06:35.480 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:06:35.480 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:06:35.480 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:06:35.480 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:06:35.480 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:06:35.480 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:06:35.480 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:06:35.480 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:06:35.480 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:06:35.480 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:06:35.480 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:06:35.480 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:06:35.480 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:06:35.480 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:06:35.480 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:06:35.480 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:06:35.480 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:06:35.480 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:06:35.480 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:06:35.480 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:06:35.480 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:06:35.480 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:06:35.480 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:06:35.480 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:06:35.480 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:06:35.480 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:06:35.480 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:06:35.480 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:06:35.480 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:06:35.480 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:06:35.480 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:06:35.480 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:06:35.480 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:06:35.480 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:06:35.480 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:06:35.480 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:06:35.480 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:06:35.480 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:06:35.480 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:06:35.480 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:06:35.480 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:06:35.480 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:06:35.480 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:06:35.480 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:06:35.480 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:06:35.480 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:06:35.480 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:06:35.480 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:06:35.480 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:06:35.480 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:06:35.480 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:06:35.480 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:06:35.480 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:06:35.480 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:06:35.480 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:06:35.480 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:06:35.480 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:06:35.481 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:06:35.481 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:06:35.481 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:06:35.481 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:06:35.481 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:06:35.481 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:06:35.481 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:06:35.481 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:06:35.481 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:06:35.481 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:06:35.481 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:06:35.481 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:06:35.481 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:06:35.481 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:06:35.481 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:06:35.481 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:06:35.481 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:06:35.481 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:06:35.481 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:06:35.481 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:06:35.481 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:06:35.481 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:06:35.481 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:06:35.481 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:06:35.481 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:06:35.481 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:06:35.481 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:06:35.481 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:06:35.481 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:06:35.481 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:06:35.481 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:06:35.481 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:06:35.481 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:06:35.481 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:06:35.481 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:35.481 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:06:35.481 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:35.481 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:06:35.481 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:35.481 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:06:35.481 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:35.481 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:06:35.740 11:50:41 -- common/autobuild_common.sh@192 -- $ uname -s 00:06:35.740 11:50:41 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:06:35.740 11:50:41 -- common/autobuild_common.sh@203 -- $ cat 00:06:35.740 ************************************ 00:06:35.740 END TEST build_native_dpdk 00:06:35.740 ************************************ 00:06:35.740 11:50:41 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:35.740 00:06:35.740 real 0m53.338s 00:06:35.740 user 6m8.830s 00:06:35.740 sys 1m5.558s 00:06:35.740 11:50:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:06:35.740 11:50:41 -- common/autotest_common.sh@10 -- $ set +x 00:06:35.740 11:50:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:35.740 11:50:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:35.740 11:50:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:35.740 11:50:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:35.740 11:50:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:35.740 11:50:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:35.740 11:50:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:35.740 11:50:41 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:06:35.740 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:06:35.998 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:06:35.998 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:06:35.998 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:36.565 Using 'verbs' RDMA provider 00:06:52.010 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:07:04.226 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:07:04.226 Creating mk/config.mk...done. 00:07:04.226 Creating mk/cc.flags.mk...done. 00:07:04.226 Type 'make' to build. 00:07:04.226 11:51:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:07:04.226 11:51:09 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:07:04.226 11:51:09 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:07:04.226 11:51:09 -- common/autotest_common.sh@10 -- $ set +x 00:07:04.226 ************************************ 00:07:04.226 START TEST make 00:07:04.226 ************************************ 00:07:04.226 11:51:09 -- common/autotest_common.sh@1114 -- $ make -j10 00:07:04.226 make[1]: Nothing to be done for 'all'. 00:07:30.772 CC lib/ut/ut.o 00:07:30.772 CC lib/log/log.o 00:07:30.772 CC lib/log/log_deprecated.o 00:07:30.772 CC lib/log/log_flags.o 00:07:30.772 CC lib/ut_mock/mock.o 00:07:30.772 LIB libspdk_ut_mock.a 00:07:30.772 LIB libspdk_log.a 00:07:30.772 LIB libspdk_ut.a 00:07:30.772 SO libspdk_ut_mock.so.5.0 00:07:30.772 SO libspdk_log.so.6.1 00:07:30.772 SO libspdk_ut.so.1.0 00:07:30.772 SYMLINK libspdk_ut_mock.so 00:07:30.772 SYMLINK libspdk_log.so 00:07:30.772 SYMLINK libspdk_ut.so 00:07:30.772 CC lib/dma/dma.o 00:07:30.772 CXX lib/trace_parser/trace.o 00:07:30.772 CC lib/util/base64.o 00:07:30.772 CC lib/util/bit_array.o 00:07:30.772 CC lib/util/crc16.o 00:07:30.772 CC lib/util/cpuset.o 00:07:30.772 CC lib/util/crc32.o 00:07:30.772 CC lib/util/crc32c.o 00:07:30.772 CC lib/ioat/ioat.o 00:07:30.772 CC lib/vfio_user/host/vfio_user_pci.o 00:07:30.772 CC lib/util/crc32_ieee.o 00:07:30.772 CC lib/util/crc64.o 00:07:30.772 CC lib/util/dif.o 00:07:30.772 CC lib/util/fd.o 00:07:30.772 CC lib/vfio_user/host/vfio_user.o 00:07:30.772 LIB libspdk_dma.a 00:07:30.772 CC lib/util/file.o 00:07:30.772 SO libspdk_dma.so.3.0 00:07:30.772 CC lib/util/hexlify.o 00:07:30.772 CC lib/util/iov.o 00:07:30.772 SYMLINK libspdk_dma.so 00:07:30.772 CC lib/util/math.o 00:07:30.772 LIB libspdk_ioat.a 00:07:30.772 CC lib/util/pipe.o 00:07:30.772 SO libspdk_ioat.so.6.0 00:07:30.772 CC lib/util/strerror_tls.o 00:07:30.772 CC lib/util/string.o 00:07:30.772 CC lib/util/uuid.o 00:07:30.772 SYMLINK libspdk_ioat.so 00:07:30.772 CC lib/util/fd_group.o 00:07:30.772 LIB libspdk_vfio_user.a 00:07:30.772 SO libspdk_vfio_user.so.4.0 00:07:30.772 CC lib/util/xor.o 00:07:30.772 CC lib/util/zipf.o 00:07:30.772 SYMLINK libspdk_vfio_user.so 00:07:30.772 LIB libspdk_util.a 00:07:30.772 SO libspdk_util.so.8.0 00:07:30.772 SYMLINK libspdk_util.so 00:07:30.772 LIB libspdk_trace_parser.a 00:07:30.772 SO libspdk_trace_parser.so.4.0 00:07:30.772 CC lib/conf/conf.o 00:07:30.772 CC lib/rdma/common.o 00:07:30.772 CC 
lib/rdma/rdma_verbs.o 00:07:30.772 CC lib/env_dpdk/env.o 00:07:30.772 CC lib/env_dpdk/memory.o 00:07:30.772 CC lib/env_dpdk/pci.o 00:07:30.772 CC lib/vmd/vmd.o 00:07:30.772 CC lib/json/json_parse.o 00:07:30.772 CC lib/idxd/idxd.o 00:07:30.772 SYMLINK libspdk_trace_parser.so 00:07:30.772 CC lib/idxd/idxd_user.o 00:07:30.772 CC lib/idxd/idxd_kernel.o 00:07:30.772 LIB libspdk_conf.a 00:07:30.772 CC lib/json/json_util.o 00:07:30.772 SO libspdk_conf.so.5.0 00:07:30.772 CC lib/vmd/led.o 00:07:30.772 LIB libspdk_rdma.a 00:07:30.772 SYMLINK libspdk_conf.so 00:07:30.772 CC lib/env_dpdk/init.o 00:07:30.772 SO libspdk_rdma.so.5.0 00:07:30.772 CC lib/env_dpdk/threads.o 00:07:30.772 CC lib/json/json_write.o 00:07:30.772 CC lib/env_dpdk/pci_ioat.o 00:07:30.772 SYMLINK libspdk_rdma.so 00:07:30.772 CC lib/env_dpdk/pci_virtio.o 00:07:30.773 CC lib/env_dpdk/pci_vmd.o 00:07:30.773 CC lib/env_dpdk/pci_idxd.o 00:07:30.773 CC lib/env_dpdk/pci_event.o 00:07:30.773 CC lib/env_dpdk/sigbus_handler.o 00:07:30.773 LIB libspdk_idxd.a 00:07:30.773 CC lib/env_dpdk/pci_dpdk.o 00:07:30.773 SO libspdk_idxd.so.11.0 00:07:30.773 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:30.773 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:30.773 SYMLINK libspdk_idxd.so 00:07:30.773 LIB libspdk_json.a 00:07:30.773 LIB libspdk_vmd.a 00:07:30.773 SO libspdk_vmd.so.5.0 00:07:30.773 SO libspdk_json.so.5.1 00:07:30.773 SYMLINK libspdk_vmd.so 00:07:30.773 SYMLINK libspdk_json.so 00:07:30.773 CC lib/jsonrpc/jsonrpc_server.o 00:07:30.773 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:30.773 CC lib/jsonrpc/jsonrpc_client.o 00:07:30.773 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:30.773 LIB libspdk_jsonrpc.a 00:07:30.773 SO libspdk_jsonrpc.so.5.1 00:07:30.773 SYMLINK libspdk_jsonrpc.so 00:07:30.773 LIB libspdk_env_dpdk.a 00:07:30.773 SO libspdk_env_dpdk.so.13.0 00:07:30.773 CC lib/rpc/rpc.o 00:07:30.773 SYMLINK libspdk_env_dpdk.so 00:07:30.773 LIB libspdk_rpc.a 00:07:30.773 SO libspdk_rpc.so.5.0 00:07:30.773 SYMLINK libspdk_rpc.so 00:07:31.032 CC lib/sock/sock.o 00:07:31.032 CC lib/sock/sock_rpc.o 00:07:31.032 CC lib/trace/trace_flags.o 00:07:31.032 CC lib/trace/trace.o 00:07:31.032 CC lib/trace/trace_rpc.o 00:07:31.032 CC lib/notify/notify.o 00:07:31.032 CC lib/notify/notify_rpc.o 00:07:31.290 LIB libspdk_notify.a 00:07:31.290 SO libspdk_notify.so.5.0 00:07:31.290 SYMLINK libspdk_notify.so 00:07:31.290 LIB libspdk_trace.a 00:07:31.290 SO libspdk_trace.so.9.0 00:07:31.548 LIB libspdk_sock.a 00:07:31.548 SO libspdk_sock.so.8.0 00:07:31.548 SYMLINK libspdk_trace.so 00:07:31.548 SYMLINK libspdk_sock.so 00:07:31.548 CC lib/thread/iobuf.o 00:07:31.548 CC lib/thread/thread.o 00:07:31.807 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:31.807 CC lib/nvme/nvme_ctrlr.o 00:07:31.807 CC lib/nvme/nvme_fabric.o 00:07:31.807 CC lib/nvme/nvme_ns.o 00:07:31.807 CC lib/nvme/nvme_ns_cmd.o 00:07:31.807 CC lib/nvme/nvme_pcie_common.o 00:07:31.807 CC lib/nvme/nvme_pcie.o 00:07:31.807 CC lib/nvme/nvme_qpair.o 00:07:32.067 CC lib/nvme/nvme.o 00:07:32.325 CC lib/nvme/nvme_quirks.o 00:07:32.584 CC lib/nvme/nvme_transport.o 00:07:32.585 CC lib/nvme/nvme_discovery.o 00:07:32.585 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:32.585 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:32.585 CC lib/nvme/nvme_tcp.o 00:07:32.844 CC lib/nvme/nvme_opal.o 00:07:32.844 CC lib/nvme/nvme_io_msg.o 00:07:33.103 CC lib/nvme/nvme_poll_group.o 00:07:33.103 CC lib/nvme/nvme_zns.o 00:07:33.103 CC lib/nvme/nvme_cuse.o 00:07:33.103 LIB libspdk_thread.a 00:07:33.103 CC lib/nvme/nvme_vfio_user.o 00:07:33.362 SO libspdk_thread.so.9.0 00:07:33.362 
SYMLINK libspdk_thread.so 00:07:33.362 CC lib/accel/accel.o 00:07:33.362 CC lib/blob/blobstore.o 00:07:33.362 CC lib/blob/request.o 00:07:33.620 CC lib/nvme/nvme_rdma.o 00:07:33.878 CC lib/blob/zeroes.o 00:07:33.878 CC lib/blob/blob_bs_dev.o 00:07:33.878 CC lib/accel/accel_rpc.o 00:07:33.878 CC lib/init/json_config.o 00:07:34.137 CC lib/init/subsystem.o 00:07:34.137 CC lib/virtio/virtio.o 00:07:34.137 CC lib/accel/accel_sw.o 00:07:34.137 CC lib/init/subsystem_rpc.o 00:07:34.137 CC lib/init/rpc.o 00:07:34.137 CC lib/virtio/virtio_vhost_user.o 00:07:34.137 CC lib/virtio/virtio_vfio_user.o 00:07:34.137 CC lib/virtio/virtio_pci.o 00:07:34.396 LIB libspdk_init.a 00:07:34.396 SO libspdk_init.so.4.0 00:07:34.396 SYMLINK libspdk_init.so 00:07:34.396 LIB libspdk_virtio.a 00:07:34.655 CC lib/event/app.o 00:07:34.655 CC lib/event/reactor.o 00:07:34.655 CC lib/event/log_rpc.o 00:07:34.655 CC lib/event/app_rpc.o 00:07:34.655 CC lib/event/scheduler_static.o 00:07:34.655 LIB libspdk_accel.a 00:07:34.655 SO libspdk_virtio.so.6.0 00:07:34.655 SO libspdk_accel.so.14.0 00:07:34.655 SYMLINK libspdk_virtio.so 00:07:34.655 SYMLINK libspdk_accel.so 00:07:34.914 CC lib/bdev/bdev.o 00:07:34.915 CC lib/bdev/bdev_rpc.o 00:07:34.915 CC lib/bdev/bdev_zone.o 00:07:34.915 CC lib/bdev/part.o 00:07:34.915 CC lib/bdev/scsi_nvme.o 00:07:34.915 LIB libspdk_nvme.a 00:07:34.915 LIB libspdk_event.a 00:07:34.915 SO libspdk_event.so.12.0 00:07:35.183 SYMLINK libspdk_event.so 00:07:35.183 SO libspdk_nvme.so.12.0 00:07:35.441 SYMLINK libspdk_nvme.so 00:07:36.376 LIB libspdk_blob.a 00:07:36.376 SO libspdk_blob.so.10.1 00:07:36.634 SYMLINK libspdk_blob.so 00:07:36.892 CC lib/lvol/lvol.o 00:07:36.892 CC lib/blobfs/blobfs.o 00:07:36.892 CC lib/blobfs/tree.o 00:07:37.827 LIB libspdk_bdev.a 00:07:37.827 LIB libspdk_blobfs.a 00:07:37.827 LIB libspdk_lvol.a 00:07:37.827 SO libspdk_blobfs.so.9.0 00:07:37.827 SO libspdk_lvol.so.9.1 00:07:37.827 SO libspdk_bdev.so.14.0 00:07:37.827 SYMLINK libspdk_blobfs.so 00:07:37.827 SYMLINK libspdk_lvol.so 00:07:37.827 SYMLINK libspdk_bdev.so 00:07:38.085 CC lib/ublk/ublk.o 00:07:38.085 CC lib/ublk/ublk_rpc.o 00:07:38.085 CC lib/nbd/nbd.o 00:07:38.085 CC lib/nbd/nbd_rpc.o 00:07:38.085 CC lib/ftl/ftl_core.o 00:07:38.085 CC lib/scsi/dev.o 00:07:38.085 CC lib/nvmf/ctrlr.o 00:07:38.085 CC lib/ftl/ftl_init.o 00:07:38.085 CC lib/scsi/lun.o 00:07:38.085 CC lib/nvmf/ctrlr_discovery.o 00:07:38.344 CC lib/ftl/ftl_layout.o 00:07:38.344 CC lib/ftl/ftl_debug.o 00:07:38.344 CC lib/nvmf/ctrlr_bdev.o 00:07:38.344 CC lib/scsi/port.o 00:07:38.344 CC lib/nvmf/subsystem.o 00:07:38.602 CC lib/nvmf/nvmf.o 00:07:38.602 LIB libspdk_nbd.a 00:07:38.602 CC lib/ftl/ftl_io.o 00:07:38.603 SO libspdk_nbd.so.6.0 00:07:38.603 CC lib/scsi/scsi.o 00:07:38.603 CC lib/ftl/ftl_sb.o 00:07:38.603 SYMLINK libspdk_nbd.so 00:07:38.603 CC lib/ftl/ftl_l2p.o 00:07:38.603 CC lib/ftl/ftl_l2p_flat.o 00:07:38.603 LIB libspdk_ublk.a 00:07:38.861 SO libspdk_ublk.so.2.0 00:07:38.861 CC lib/scsi/scsi_bdev.o 00:07:38.861 CC lib/scsi/scsi_pr.o 00:07:38.861 SYMLINK libspdk_ublk.so 00:07:38.861 CC lib/scsi/scsi_rpc.o 00:07:38.861 CC lib/scsi/task.o 00:07:38.861 CC lib/nvmf/nvmf_rpc.o 00:07:38.861 CC lib/ftl/ftl_nv_cache.o 00:07:38.861 CC lib/ftl/ftl_band.o 00:07:39.118 CC lib/ftl/ftl_band_ops.o 00:07:39.118 CC lib/ftl/ftl_writer.o 00:07:39.118 CC lib/ftl/ftl_rq.o 00:07:39.376 LIB libspdk_scsi.a 00:07:39.376 CC lib/ftl/ftl_reloc.o 00:07:39.376 CC lib/ftl/ftl_l2p_cache.o 00:07:39.376 CC lib/nvmf/transport.o 00:07:39.376 SO libspdk_scsi.so.8.0 00:07:39.376 CC 
lib/nvmf/tcp.o 00:07:39.376 CC lib/ftl/ftl_p2l.o 00:07:39.376 SYMLINK libspdk_scsi.so 00:07:39.376 CC lib/ftl/mngt/ftl_mngt.o 00:07:39.635 CC lib/nvmf/rdma.o 00:07:39.635 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:39.635 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:39.893 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:39.893 CC lib/iscsi/conn.o 00:07:39.893 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:39.893 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:39.893 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:39.893 CC lib/vhost/vhost.o 00:07:39.893 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:40.150 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:40.150 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:40.150 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:40.150 CC lib/iscsi/init_grp.o 00:07:40.150 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:40.150 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:40.150 CC lib/ftl/utils/ftl_conf.o 00:07:40.408 CC lib/ftl/utils/ftl_md.o 00:07:40.408 CC lib/ftl/utils/ftl_mempool.o 00:07:40.408 CC lib/iscsi/iscsi.o 00:07:40.408 CC lib/ftl/utils/ftl_bitmap.o 00:07:40.408 CC lib/ftl/utils/ftl_property.o 00:07:40.408 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:40.665 CC lib/iscsi/md5.o 00:07:40.665 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:40.666 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:40.666 CC lib/vhost/vhost_rpc.o 00:07:40.666 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:40.666 CC lib/vhost/vhost_scsi.o 00:07:40.666 CC lib/iscsi/param.o 00:07:40.924 CC lib/iscsi/portal_grp.o 00:07:40.924 CC lib/iscsi/tgt_node.o 00:07:40.924 CC lib/iscsi/iscsi_subsystem.o 00:07:40.924 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:40.924 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:41.183 CC lib/iscsi/iscsi_rpc.o 00:07:41.183 CC lib/iscsi/task.o 00:07:41.183 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:41.183 CC lib/vhost/vhost_blk.o 00:07:41.183 CC lib/vhost/rte_vhost_user.o 00:07:41.442 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:41.442 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:41.442 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:41.442 CC lib/ftl/base/ftl_base_dev.o 00:07:41.442 CC lib/ftl/base/ftl_base_bdev.o 00:07:41.442 CC lib/ftl/ftl_trace.o 00:07:41.701 LIB libspdk_ftl.a 00:07:41.960 LIB libspdk_nvmf.a 00:07:41.960 LIB libspdk_iscsi.a 00:07:41.960 SO libspdk_nvmf.so.17.0 00:07:41.960 SO libspdk_iscsi.so.7.0 00:07:41.960 SO libspdk_ftl.so.8.0 00:07:42.218 SYMLINK libspdk_nvmf.so 00:07:42.218 SYMLINK libspdk_iscsi.so 00:07:42.476 SYMLINK libspdk_ftl.so 00:07:42.476 LIB libspdk_vhost.a 00:07:42.476 SO libspdk_vhost.so.7.1 00:07:42.736 SYMLINK libspdk_vhost.so 00:07:42.995 CC module/env_dpdk/env_dpdk_rpc.o 00:07:42.995 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:42.995 CC module/sock/uring/uring.o 00:07:42.995 CC module/sock/posix/posix.o 00:07:42.995 CC module/accel/error/accel_error.o 00:07:42.995 CC module/blob/bdev/blob_bdev.o 00:07:42.995 CC module/accel/iaa/accel_iaa.o 00:07:42.995 CC module/accel/ioat/accel_ioat.o 00:07:42.995 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:42.995 CC module/accel/dsa/accel_dsa.o 00:07:42.995 LIB libspdk_env_dpdk_rpc.a 00:07:42.995 SO libspdk_env_dpdk_rpc.so.5.0 00:07:42.995 LIB libspdk_scheduler_dpdk_governor.a 00:07:43.254 SYMLINK libspdk_env_dpdk_rpc.so 00:07:43.254 SO libspdk_scheduler_dpdk_governor.so.3.0 00:07:43.254 CC module/accel/error/accel_error_rpc.o 00:07:43.254 CC module/accel/ioat/accel_ioat_rpc.o 00:07:43.254 CC module/accel/dsa/accel_dsa_rpc.o 00:07:43.254 CC module/accel/iaa/accel_iaa_rpc.o 00:07:43.254 LIB libspdk_scheduler_dynamic.a 00:07:43.254 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:43.254 SO 
libspdk_scheduler_dynamic.so.3.0 00:07:43.254 LIB libspdk_blob_bdev.a 00:07:43.254 SO libspdk_blob_bdev.so.10.1 00:07:43.254 SYMLINK libspdk_scheduler_dynamic.so 00:07:43.254 SYMLINK libspdk_blob_bdev.so 00:07:43.254 LIB libspdk_accel_dsa.a 00:07:43.254 LIB libspdk_accel_iaa.a 00:07:43.254 LIB libspdk_accel_error.a 00:07:43.254 LIB libspdk_accel_ioat.a 00:07:43.254 CC module/scheduler/gscheduler/gscheduler.o 00:07:43.254 SO libspdk_accel_iaa.so.2.0 00:07:43.254 SO libspdk_accel_error.so.1.0 00:07:43.254 SO libspdk_accel_dsa.so.4.0 00:07:43.254 SO libspdk_accel_ioat.so.5.0 00:07:43.513 SYMLINK libspdk_accel_iaa.so 00:07:43.513 SYMLINK libspdk_accel_error.so 00:07:43.513 SYMLINK libspdk_accel_dsa.so 00:07:43.513 SYMLINK libspdk_accel_ioat.so 00:07:43.513 CC module/bdev/delay/vbdev_delay.o 00:07:43.513 CC module/bdev/error/vbdev_error.o 00:07:43.513 CC module/blobfs/bdev/blobfs_bdev.o 00:07:43.513 LIB libspdk_scheduler_gscheduler.a 00:07:43.513 SO libspdk_scheduler_gscheduler.so.3.0 00:07:43.513 CC module/bdev/lvol/vbdev_lvol.o 00:07:43.513 CC module/bdev/null/bdev_null.o 00:07:43.513 CC module/bdev/gpt/gpt.o 00:07:43.513 CC module/bdev/malloc/bdev_malloc.o 00:07:43.513 SYMLINK libspdk_scheduler_gscheduler.so 00:07:43.513 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:43.773 LIB libspdk_sock_uring.a 00:07:43.773 SO libspdk_sock_uring.so.4.0 00:07:43.773 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:43.773 LIB libspdk_sock_posix.a 00:07:43.773 SO libspdk_sock_posix.so.5.0 00:07:43.773 CC module/bdev/gpt/vbdev_gpt.o 00:07:43.773 SYMLINK libspdk_sock_uring.so 00:07:43.773 CC module/bdev/null/bdev_null_rpc.o 00:07:43.773 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:43.773 CC module/bdev/error/vbdev_error_rpc.o 00:07:43.773 SYMLINK libspdk_sock_posix.so 00:07:43.773 LIB libspdk_blobfs_bdev.a 00:07:44.032 SO libspdk_blobfs_bdev.so.5.0 00:07:44.032 LIB libspdk_bdev_malloc.a 00:07:44.032 CC module/bdev/nvme/bdev_nvme.o 00:07:44.032 SO libspdk_bdev_malloc.so.5.0 00:07:44.032 LIB libspdk_bdev_null.a 00:07:44.032 CC module/bdev/passthru/vbdev_passthru.o 00:07:44.032 LIB libspdk_bdev_error.a 00:07:44.032 LIB libspdk_bdev_delay.a 00:07:44.032 CC module/bdev/raid/bdev_raid.o 00:07:44.032 SYMLINK libspdk_blobfs_bdev.so 00:07:44.032 SO libspdk_bdev_null.so.5.0 00:07:44.032 SO libspdk_bdev_error.so.5.0 00:07:44.032 SO libspdk_bdev_delay.so.5.0 00:07:44.032 SYMLINK libspdk_bdev_malloc.so 00:07:44.032 CC module/bdev/raid/bdev_raid_rpc.o 00:07:44.032 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:44.032 LIB libspdk_bdev_gpt.a 00:07:44.032 SYMLINK libspdk_bdev_null.so 00:07:44.032 CC module/bdev/raid/bdev_raid_sb.o 00:07:44.032 SO libspdk_bdev_gpt.so.5.0 00:07:44.032 SYMLINK libspdk_bdev_error.so 00:07:44.032 SYMLINK libspdk_bdev_delay.so 00:07:44.032 SYMLINK libspdk_bdev_gpt.so 00:07:44.291 CC module/bdev/split/vbdev_split.o 00:07:44.291 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:44.291 CC module/bdev/uring/bdev_uring.o 00:07:44.291 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:44.291 CC module/bdev/aio/bdev_aio.o 00:07:44.291 LIB libspdk_bdev_lvol.a 00:07:44.291 SO libspdk_bdev_lvol.so.5.0 00:07:44.291 CC module/bdev/split/vbdev_split_rpc.o 00:07:44.549 CC module/bdev/ftl/bdev_ftl.o 00:07:44.549 LIB libspdk_bdev_passthru.a 00:07:44.549 CC module/bdev/iscsi/bdev_iscsi.o 00:07:44.549 SYMLINK libspdk_bdev_lvol.so 00:07:44.549 SO libspdk_bdev_passthru.so.5.0 00:07:44.549 SYMLINK libspdk_bdev_passthru.so 00:07:44.549 CC module/bdev/uring/bdev_uring_rpc.o 00:07:44.549 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:44.549 LIB libspdk_bdev_split.a 00:07:44.549 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:44.549 SO libspdk_bdev_split.so.5.0 00:07:44.549 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:44.549 CC module/bdev/aio/bdev_aio_rpc.o 00:07:44.808 SYMLINK libspdk_bdev_split.so 00:07:44.808 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:44.808 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:44.808 LIB libspdk_bdev_uring.a 00:07:44.808 SO libspdk_bdev_uring.so.5.0 00:07:44.808 LIB libspdk_bdev_zone_block.a 00:07:44.808 SO libspdk_bdev_zone_block.so.5.0 00:07:44.808 LIB libspdk_bdev_aio.a 00:07:44.808 SYMLINK libspdk_bdev_uring.so 00:07:44.808 CC module/bdev/raid/raid0.o 00:07:44.808 CC module/bdev/raid/raid1.o 00:07:44.808 SO libspdk_bdev_aio.so.5.0 00:07:44.808 LIB libspdk_bdev_iscsi.a 00:07:44.808 SYMLINK libspdk_bdev_zone_block.so 00:07:44.808 CC module/bdev/raid/concat.o 00:07:44.808 CC module/bdev/nvme/nvme_rpc.o 00:07:44.808 SO libspdk_bdev_iscsi.so.5.0 00:07:45.067 SYMLINK libspdk_bdev_aio.so 00:07:45.067 LIB libspdk_bdev_ftl.a 00:07:45.067 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:45.067 SO libspdk_bdev_ftl.so.5.0 00:07:45.067 SYMLINK libspdk_bdev_iscsi.so 00:07:45.067 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:45.067 SYMLINK libspdk_bdev_ftl.so 00:07:45.067 CC module/bdev/nvme/bdev_mdns_client.o 00:07:45.067 CC module/bdev/nvme/vbdev_opal.o 00:07:45.067 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:45.067 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:45.067 LIB libspdk_bdev_raid.a 00:07:45.326 LIB libspdk_bdev_virtio.a 00:07:45.326 SO libspdk_bdev_raid.so.5.0 00:07:45.326 SO libspdk_bdev_virtio.so.5.0 00:07:45.326 SYMLINK libspdk_bdev_raid.so 00:07:45.326 SYMLINK libspdk_bdev_virtio.so 00:07:46.301 LIB libspdk_bdev_nvme.a 00:07:46.301 SO libspdk_bdev_nvme.so.6.0 00:07:46.560 SYMLINK libspdk_bdev_nvme.so 00:07:46.818 CC module/event/subsystems/vmd/vmd.o 00:07:46.819 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:46.819 CC module/event/subsystems/sock/sock.o 00:07:46.819 CC module/event/subsystems/scheduler/scheduler.o 00:07:46.819 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:46.819 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:46.819 CC module/event/subsystems/iobuf/iobuf.o 00:07:46.819 LIB libspdk_event_sock.a 00:07:47.078 LIB libspdk_event_vmd.a 00:07:47.078 SO libspdk_event_sock.so.4.0 00:07:47.078 LIB libspdk_event_vhost_blk.a 00:07:47.078 LIB libspdk_event_scheduler.a 00:07:47.078 SO libspdk_event_vmd.so.5.0 00:07:47.078 SO libspdk_event_vhost_blk.so.2.0 00:07:47.078 LIB libspdk_event_iobuf.a 00:07:47.078 SYMLINK libspdk_event_sock.so 00:07:47.078 SO libspdk_event_scheduler.so.3.0 00:07:47.078 SO libspdk_event_iobuf.so.2.0 00:07:47.078 SYMLINK libspdk_event_vmd.so 00:07:47.078 SYMLINK libspdk_event_vhost_blk.so 00:07:47.078 SYMLINK libspdk_event_iobuf.so 00:07:47.078 SYMLINK libspdk_event_scheduler.so 00:07:47.339 CC module/event/subsystems/accel/accel.o 00:07:47.339 LIB libspdk_event_accel.a 00:07:47.598 SO libspdk_event_accel.so.5.0 00:07:47.598 SYMLINK libspdk_event_accel.so 00:07:47.857 CC module/event/subsystems/bdev/bdev.o 00:07:47.857 LIB libspdk_event_bdev.a 00:07:48.116 SO libspdk_event_bdev.so.5.0 00:07:48.116 SYMLINK libspdk_event_bdev.so 00:07:48.116 CC module/event/subsystems/ublk/ublk.o 00:07:48.116 CC module/event/subsystems/scsi/scsi.o 00:07:48.116 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:48.116 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:48.375 CC module/event/subsystems/nbd/nbd.o 00:07:48.375 LIB 
libspdk_event_ublk.a 00:07:48.375 LIB libspdk_event_scsi.a 00:07:48.375 LIB libspdk_event_nbd.a 00:07:48.375 SO libspdk_event_ublk.so.2.0 00:07:48.375 SO libspdk_event_scsi.so.5.0 00:07:48.375 SO libspdk_event_nbd.so.5.0 00:07:48.633 SYMLINK libspdk_event_ublk.so 00:07:48.633 SYMLINK libspdk_event_scsi.so 00:07:48.633 LIB libspdk_event_nvmf.a 00:07:48.633 SYMLINK libspdk_event_nbd.so 00:07:48.633 SO libspdk_event_nvmf.so.5.0 00:07:48.633 SYMLINK libspdk_event_nvmf.so 00:07:48.633 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:48.633 CC module/event/subsystems/iscsi/iscsi.o 00:07:48.891 LIB libspdk_event_vhost_scsi.a 00:07:48.891 LIB libspdk_event_iscsi.a 00:07:48.891 SO libspdk_event_vhost_scsi.so.2.0 00:07:48.891 SO libspdk_event_iscsi.so.5.0 00:07:48.891 SYMLINK libspdk_event_vhost_scsi.so 00:07:48.891 SYMLINK libspdk_event_iscsi.so 00:07:49.148 SO libspdk.so.5.0 00:07:49.148 SYMLINK libspdk.so 00:07:49.406 CC app/trace_record/trace_record.o 00:07:49.406 CXX app/trace/trace.o 00:07:49.406 TEST_HEADER include/spdk/accel.h 00:07:49.406 TEST_HEADER include/spdk/accel_module.h 00:07:49.406 TEST_HEADER include/spdk/assert.h 00:07:49.406 TEST_HEADER include/spdk/barrier.h 00:07:49.406 TEST_HEADER include/spdk/base64.h 00:07:49.406 TEST_HEADER include/spdk/bdev.h 00:07:49.406 TEST_HEADER include/spdk/bdev_module.h 00:07:49.406 TEST_HEADER include/spdk/bdev_zone.h 00:07:49.406 TEST_HEADER include/spdk/bit_array.h 00:07:49.406 TEST_HEADER include/spdk/bit_pool.h 00:07:49.406 TEST_HEADER include/spdk/blob_bdev.h 00:07:49.406 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:49.406 TEST_HEADER include/spdk/blobfs.h 00:07:49.406 TEST_HEADER include/spdk/blob.h 00:07:49.406 TEST_HEADER include/spdk/conf.h 00:07:49.406 TEST_HEADER include/spdk/config.h 00:07:49.406 TEST_HEADER include/spdk/cpuset.h 00:07:49.406 TEST_HEADER include/spdk/crc16.h 00:07:49.406 TEST_HEADER include/spdk/crc32.h 00:07:49.406 TEST_HEADER include/spdk/crc64.h 00:07:49.406 TEST_HEADER include/spdk/dif.h 00:07:49.406 TEST_HEADER include/spdk/dma.h 00:07:49.406 TEST_HEADER include/spdk/endian.h 00:07:49.406 TEST_HEADER include/spdk/env_dpdk.h 00:07:49.406 TEST_HEADER include/spdk/env.h 00:07:49.406 TEST_HEADER include/spdk/event.h 00:07:49.406 TEST_HEADER include/spdk/fd_group.h 00:07:49.406 TEST_HEADER include/spdk/fd.h 00:07:49.406 TEST_HEADER include/spdk/file.h 00:07:49.406 CC examples/accel/perf/accel_perf.o 00:07:49.406 TEST_HEADER include/spdk/ftl.h 00:07:49.406 TEST_HEADER include/spdk/gpt_spec.h 00:07:49.406 TEST_HEADER include/spdk/hexlify.h 00:07:49.406 TEST_HEADER include/spdk/histogram_data.h 00:07:49.406 TEST_HEADER include/spdk/idxd.h 00:07:49.406 TEST_HEADER include/spdk/idxd_spec.h 00:07:49.406 CC test/app/bdev_svc/bdev_svc.o 00:07:49.406 TEST_HEADER include/spdk/init.h 00:07:49.406 CC test/accel/dif/dif.o 00:07:49.406 TEST_HEADER include/spdk/ioat.h 00:07:49.406 TEST_HEADER include/spdk/ioat_spec.h 00:07:49.406 TEST_HEADER include/spdk/iscsi_spec.h 00:07:49.406 CC test/bdev/bdevio/bdevio.o 00:07:49.406 CC test/blobfs/mkfs/mkfs.o 00:07:49.406 TEST_HEADER include/spdk/json.h 00:07:49.406 TEST_HEADER include/spdk/jsonrpc.h 00:07:49.406 TEST_HEADER include/spdk/likely.h 00:07:49.406 TEST_HEADER include/spdk/log.h 00:07:49.406 TEST_HEADER include/spdk/lvol.h 00:07:49.406 CC test/dma/test_dma/test_dma.o 00:07:49.406 TEST_HEADER include/spdk/memory.h 00:07:49.406 TEST_HEADER include/spdk/mmio.h 00:07:49.406 TEST_HEADER include/spdk/nbd.h 00:07:49.406 CC test/env/mem_callbacks/mem_callbacks.o 00:07:49.406 
TEST_HEADER include/spdk/notify.h 00:07:49.406 TEST_HEADER include/spdk/nvme.h 00:07:49.406 TEST_HEADER include/spdk/nvme_intel.h 00:07:49.406 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:49.406 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:49.406 TEST_HEADER include/spdk/nvme_spec.h 00:07:49.406 TEST_HEADER include/spdk/nvme_zns.h 00:07:49.406 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:49.406 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:49.406 TEST_HEADER include/spdk/nvmf.h 00:07:49.406 TEST_HEADER include/spdk/nvmf_spec.h 00:07:49.406 TEST_HEADER include/spdk/nvmf_transport.h 00:07:49.406 TEST_HEADER include/spdk/opal.h 00:07:49.406 TEST_HEADER include/spdk/opal_spec.h 00:07:49.406 TEST_HEADER include/spdk/pci_ids.h 00:07:49.406 TEST_HEADER include/spdk/pipe.h 00:07:49.406 TEST_HEADER include/spdk/queue.h 00:07:49.406 TEST_HEADER include/spdk/reduce.h 00:07:49.406 TEST_HEADER include/spdk/rpc.h 00:07:49.406 TEST_HEADER include/spdk/scheduler.h 00:07:49.406 TEST_HEADER include/spdk/scsi.h 00:07:49.406 TEST_HEADER include/spdk/scsi_spec.h 00:07:49.406 TEST_HEADER include/spdk/sock.h 00:07:49.406 TEST_HEADER include/spdk/stdinc.h 00:07:49.406 TEST_HEADER include/spdk/string.h 00:07:49.406 TEST_HEADER include/spdk/thread.h 00:07:49.406 TEST_HEADER include/spdk/trace.h 00:07:49.406 TEST_HEADER include/spdk/trace_parser.h 00:07:49.406 TEST_HEADER include/spdk/tree.h 00:07:49.406 TEST_HEADER include/spdk/ublk.h 00:07:49.406 TEST_HEADER include/spdk/util.h 00:07:49.406 TEST_HEADER include/spdk/uuid.h 00:07:49.406 TEST_HEADER include/spdk/version.h 00:07:49.675 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:49.675 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:49.675 TEST_HEADER include/spdk/vhost.h 00:07:49.675 TEST_HEADER include/spdk/vmd.h 00:07:49.675 TEST_HEADER include/spdk/xor.h 00:07:49.675 TEST_HEADER include/spdk/zipf.h 00:07:49.675 CXX test/cpp_headers/accel.o 00:07:49.675 LINK spdk_trace_record 00:07:49.675 LINK bdev_svc 00:07:49.675 LINK mem_callbacks 00:07:49.675 LINK mkfs 00:07:49.675 LINK spdk_trace 00:07:49.675 CXX test/cpp_headers/accel_module.o 00:07:49.933 LINK dif 00:07:49.933 CC test/env/vtophys/vtophys.o 00:07:49.933 LINK test_dma 00:07:49.933 CC app/nvmf_tgt/nvmf_main.o 00:07:49.933 LINK bdevio 00:07:49.933 LINK accel_perf 00:07:49.933 CXX test/cpp_headers/assert.o 00:07:49.933 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:49.933 CC app/iscsi_tgt/iscsi_tgt.o 00:07:49.933 CC test/event/event_perf/event_perf.o 00:07:49.933 LINK vtophys 00:07:49.933 LINK nvmf_tgt 00:07:50.192 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:50.192 CXX test/cpp_headers/barrier.o 00:07:50.192 CC test/event/reactor/reactor.o 00:07:50.192 LINK event_perf 00:07:50.192 CC app/spdk_tgt/spdk_tgt.o 00:07:50.192 LINK iscsi_tgt 00:07:50.192 CC examples/bdev/hello_world/hello_bdev.o 00:07:50.192 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:50.192 LINK reactor 00:07:50.192 CXX test/cpp_headers/base64.o 00:07:50.449 CC test/app/histogram_perf/histogram_perf.o 00:07:50.449 CC examples/bdev/bdevperf/bdevperf.o 00:07:50.449 LINK nvme_fuzz 00:07:50.449 LINK env_dpdk_post_init 00:07:50.449 LINK spdk_tgt 00:07:50.449 LINK hello_bdev 00:07:50.449 CXX test/cpp_headers/bdev.o 00:07:50.449 CC test/event/reactor_perf/reactor_perf.o 00:07:50.449 CC test/app/jsoncat/jsoncat.o 00:07:50.449 LINK histogram_perf 00:07:50.449 CXX test/cpp_headers/bdev_module.o 00:07:50.707 CC test/env/memory/memory_ut.o 00:07:50.707 LINK reactor_perf 00:07:50.707 LINK jsoncat 00:07:50.707 CXX test/cpp_headers/bdev_zone.o 
00:07:50.707 CXX test/cpp_headers/bit_array.o 00:07:50.707 CC app/spdk_lspci/spdk_lspci.o 00:07:50.707 CC test/app/stub/stub.o 00:07:50.707 CXX test/cpp_headers/bit_pool.o 00:07:50.707 CC test/env/pci/pci_ut.o 00:07:50.707 LINK spdk_lspci 00:07:50.966 CC test/event/app_repeat/app_repeat.o 00:07:50.966 LINK stub 00:07:50.966 CC test/event/scheduler/scheduler.o 00:07:50.966 CXX test/cpp_headers/blob_bdev.o 00:07:50.966 LINK app_repeat 00:07:50.966 CC test/lvol/esnap/esnap.o 00:07:50.966 CC app/spdk_nvme_perf/perf.o 00:07:51.223 LINK memory_ut 00:07:51.223 LINK bdevperf 00:07:51.223 CXX test/cpp_headers/blobfs_bdev.o 00:07:51.223 LINK scheduler 00:07:51.224 CC test/nvme/aer/aer.o 00:07:51.224 CXX test/cpp_headers/blobfs.o 00:07:51.224 LINK pci_ut 00:07:51.224 CXX test/cpp_headers/blob.o 00:07:51.483 CC test/nvme/reset/reset.o 00:07:51.483 CC test/nvme/sgl/sgl.o 00:07:51.483 CC test/nvme/e2edp/nvme_dp.o 00:07:51.483 LINK aer 00:07:51.483 CXX test/cpp_headers/conf.o 00:07:51.483 CC examples/blob/hello_world/hello_blob.o 00:07:51.483 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:51.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:51.742 CXX test/cpp_headers/config.o 00:07:51.742 CXX test/cpp_headers/cpuset.o 00:07:51.742 LINK reset 00:07:51.742 CXX test/cpp_headers/crc16.o 00:07:51.742 LINK iscsi_fuzz 00:07:51.742 LINK nvme_dp 00:07:51.742 LINK sgl 00:07:51.742 LINK hello_blob 00:07:52.001 CXX test/cpp_headers/crc32.o 00:07:52.001 CXX test/cpp_headers/crc64.o 00:07:52.001 CC test/nvme/overhead/overhead.o 00:07:52.001 LINK spdk_nvme_perf 00:07:52.001 CXX test/cpp_headers/dif.o 00:07:52.001 CXX test/cpp_headers/dma.o 00:07:52.001 CC examples/blob/cli/blobcli.o 00:07:52.001 CXX test/cpp_headers/endian.o 00:07:52.001 LINK vhost_fuzz 00:07:52.001 CXX test/cpp_headers/env_dpdk.o 00:07:52.001 CXX test/cpp_headers/env.o 00:07:52.001 CXX test/cpp_headers/event.o 00:07:52.001 CC app/spdk_nvme_identify/identify.o 00:07:52.001 CC app/spdk_nvme_discover/discovery_aer.o 00:07:52.261 CXX test/cpp_headers/fd_group.o 00:07:52.261 CXX test/cpp_headers/fd.o 00:07:52.261 LINK overhead 00:07:52.261 CC test/nvme/err_injection/err_injection.o 00:07:52.261 LINK spdk_nvme_discover 00:07:52.261 CC app/spdk_top/spdk_top.o 00:07:52.261 CXX test/cpp_headers/file.o 00:07:52.520 CC app/spdk_dd/spdk_dd.o 00:07:52.520 CC app/vhost/vhost.o 00:07:52.520 LINK blobcli 00:07:52.520 LINK err_injection 00:07:52.520 CC app/fio/nvme/fio_plugin.o 00:07:52.520 CXX test/cpp_headers/ftl.o 00:07:52.520 CC app/fio/bdev/fio_plugin.o 00:07:52.520 LINK vhost 00:07:52.825 CC test/nvme/startup/startup.o 00:07:52.825 CXX test/cpp_headers/gpt_spec.o 00:07:52.825 CC examples/ioat/perf/perf.o 00:07:52.825 LINK spdk_dd 00:07:52.825 CC examples/ioat/verify/verify.o 00:07:52.825 LINK startup 00:07:52.825 LINK spdk_nvme_identify 00:07:52.825 CXX test/cpp_headers/hexlify.o 00:07:53.095 CXX test/cpp_headers/histogram_data.o 00:07:53.095 LINK ioat_perf 00:07:53.095 LINK spdk_nvme 00:07:53.095 CXX test/cpp_headers/idxd.o 00:07:53.095 LINK spdk_bdev 00:07:53.095 CC test/nvme/reserve/reserve.o 00:07:53.095 LINK verify 00:07:53.095 CC test/rpc_client/rpc_client_test.o 00:07:53.095 CXX test/cpp_headers/idxd_spec.o 00:07:53.095 CC test/nvme/simple_copy/simple_copy.o 00:07:53.095 CC test/nvme/connect_stress/connect_stress.o 00:07:53.095 CC test/nvme/boot_partition/boot_partition.o 00:07:53.354 LINK spdk_top 00:07:53.354 LINK reserve 00:07:53.354 CC examples/nvme/hello_world/hello_world.o 00:07:53.354 LINK rpc_client_test 00:07:53.354 CXX 
test/cpp_headers/init.o 00:07:53.354 CC examples/sock/hello_world/hello_sock.o 00:07:53.354 LINK connect_stress 00:07:53.354 LINK boot_partition 00:07:53.354 LINK simple_copy 00:07:53.613 CXX test/cpp_headers/ioat.o 00:07:53.613 CC examples/vmd/lsvmd/lsvmd.o 00:07:53.613 CXX test/cpp_headers/ioat_spec.o 00:07:53.613 LINK hello_world 00:07:53.613 CC examples/nvmf/nvmf/nvmf.o 00:07:53.613 CC examples/util/zipf/zipf.o 00:07:53.613 LINK hello_sock 00:07:53.613 CC test/nvme/compliance/nvme_compliance.o 00:07:53.613 LINK lsvmd 00:07:53.613 CC examples/thread/thread/thread_ex.o 00:07:53.613 CXX test/cpp_headers/iscsi_spec.o 00:07:53.872 LINK zipf 00:07:53.872 CC examples/nvme/reconnect/reconnect.o 00:07:53.872 CC examples/idxd/perf/perf.o 00:07:53.872 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:53.872 CC examples/vmd/led/led.o 00:07:53.872 CXX test/cpp_headers/json.o 00:07:53.872 LINK nvmf 00:07:53.872 LINK thread 00:07:53.872 CC test/nvme/fused_ordering/fused_ordering.o 00:07:53.872 LINK nvme_compliance 00:07:54.131 LINK led 00:07:54.131 LINK interrupt_tgt 00:07:54.131 CXX test/cpp_headers/jsonrpc.o 00:07:54.131 LINK idxd_perf 00:07:54.131 LINK reconnect 00:07:54.131 LINK fused_ordering 00:07:54.131 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:54.131 CC test/nvme/fdp/fdp.o 00:07:54.131 CC test/nvme/cuse/cuse.o 00:07:54.390 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:54.390 CC test/thread/poller_perf/poller_perf.o 00:07:54.390 CXX test/cpp_headers/likely.o 00:07:54.390 CC examples/nvme/arbitration/arbitration.o 00:07:54.390 LINK doorbell_aers 00:07:54.390 CC examples/nvme/hotplug/hotplug.o 00:07:54.390 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:54.390 LINK poller_perf 00:07:54.390 CXX test/cpp_headers/log.o 00:07:54.649 LINK fdp 00:07:54.649 CC examples/nvme/abort/abort.o 00:07:54.649 LINK cmb_copy 00:07:54.649 LINK hotplug 00:07:54.649 CXX test/cpp_headers/lvol.o 00:07:54.649 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:54.649 CXX test/cpp_headers/memory.o 00:07:54.649 LINK arbitration 00:07:54.649 LINK nvme_manage 00:07:54.907 CXX test/cpp_headers/mmio.o 00:07:54.907 CXX test/cpp_headers/nbd.o 00:07:54.907 CXX test/cpp_headers/notify.o 00:07:54.907 CXX test/cpp_headers/nvme.o 00:07:54.907 CXX test/cpp_headers/nvme_intel.o 00:07:54.907 LINK pmr_persistence 00:07:54.907 CXX test/cpp_headers/nvme_ocssd.o 00:07:54.907 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:54.907 CXX test/cpp_headers/nvme_spec.o 00:07:54.907 LINK abort 00:07:54.907 CXX test/cpp_headers/nvme_zns.o 00:07:54.907 CXX test/cpp_headers/nvmf_cmd.o 00:07:54.907 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:54.907 CXX test/cpp_headers/nvmf.o 00:07:55.166 CXX test/cpp_headers/nvmf_spec.o 00:07:55.166 CXX test/cpp_headers/nvmf_transport.o 00:07:55.166 CXX test/cpp_headers/opal.o 00:07:55.166 CXX test/cpp_headers/opal_spec.o 00:07:55.166 CXX test/cpp_headers/pci_ids.o 00:07:55.166 CXX test/cpp_headers/pipe.o 00:07:55.166 CXX test/cpp_headers/queue.o 00:07:55.166 CXX test/cpp_headers/reduce.o 00:07:55.166 CXX test/cpp_headers/rpc.o 00:07:55.166 CXX test/cpp_headers/scheduler.o 00:07:55.425 CXX test/cpp_headers/scsi.o 00:07:55.425 CXX test/cpp_headers/scsi_spec.o 00:07:55.425 CXX test/cpp_headers/sock.o 00:07:55.425 LINK cuse 00:07:55.425 CXX test/cpp_headers/stdinc.o 00:07:55.425 CXX test/cpp_headers/string.o 00:07:55.425 CXX test/cpp_headers/thread.o 00:07:55.425 CXX test/cpp_headers/trace.o 00:07:55.425 CXX test/cpp_headers/trace_parser.o 00:07:55.425 CXX test/cpp_headers/tree.o 00:07:55.425 CXX 
test/cpp_headers/ublk.o 00:07:55.425 CXX test/cpp_headers/util.o 00:07:55.425 CXX test/cpp_headers/uuid.o 00:07:55.425 CXX test/cpp_headers/version.o 00:07:55.425 CXX test/cpp_headers/vfio_user_pci.o 00:07:55.425 CXX test/cpp_headers/vfio_user_spec.o 00:07:55.425 CXX test/cpp_headers/vhost.o 00:07:55.683 CXX test/cpp_headers/vmd.o 00:07:55.683 CXX test/cpp_headers/xor.o 00:07:55.683 CXX test/cpp_headers/zipf.o 00:07:55.683 LINK esnap 00:07:56.252 00:07:56.252 real 0m52.418s 00:07:56.252 user 4m50.821s 00:07:56.252 sys 1m6.248s 00:07:56.252 11:52:01 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:07:56.252 11:52:01 -- common/autotest_common.sh@10 -- $ set +x 00:07:56.252 ************************************ 00:07:56.252 END TEST make 00:07:56.252 ************************************ 00:07:56.252 11:52:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.252 11:52:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.252 11:52:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.252 11:52:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.252 11:52:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.252 11:52:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.252 11:52:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.252 11:52:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.252 11:52:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.252 11:52:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.252 11:52:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.252 11:52:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.252 11:52:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.252 11:52:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.252 11:52:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.252 11:52:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.252 11:52:01 -- scripts/common.sh@344 -- # : 1 00:07:56.252 11:52:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.252 11:52:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.252 11:52:01 -- scripts/common.sh@364 -- # decimal 1 00:07:56.252 11:52:01 -- scripts/common.sh@352 -- # local d=1 00:07:56.252 11:52:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.252 11:52:01 -- scripts/common.sh@354 -- # echo 1 00:07:56.252 11:52:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.252 11:52:01 -- scripts/common.sh@365 -- # decimal 2 00:07:56.252 11:52:01 -- scripts/common.sh@352 -- # local d=2 00:07:56.252 11:52:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.252 11:52:01 -- scripts/common.sh@354 -- # echo 2 00:07:56.252 11:52:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.252 11:52:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.252 11:52:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.252 11:52:01 -- scripts/common.sh@367 -- # return 0 00:07:56.252 11:52:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.252 11:52:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.252 --rc genhtml_branch_coverage=1 00:07:56.252 --rc genhtml_function_coverage=1 00:07:56.252 --rc genhtml_legend=1 00:07:56.252 --rc geninfo_all_blocks=1 00:07:56.252 --rc geninfo_unexecuted_blocks=1 00:07:56.252 00:07:56.252 ' 00:07:56.252 11:52:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.252 --rc genhtml_branch_coverage=1 00:07:56.252 --rc genhtml_function_coverage=1 00:07:56.252 --rc genhtml_legend=1 00:07:56.252 --rc geninfo_all_blocks=1 00:07:56.252 --rc geninfo_unexecuted_blocks=1 00:07:56.252 00:07:56.252 ' 00:07:56.252 11:52:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.252 --rc genhtml_branch_coverage=1 00:07:56.252 --rc genhtml_function_coverage=1 00:07:56.252 --rc genhtml_legend=1 00:07:56.252 --rc geninfo_all_blocks=1 00:07:56.252 --rc geninfo_unexecuted_blocks=1 00:07:56.252 00:07:56.252 ' 00:07:56.252 11:52:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.252 --rc genhtml_branch_coverage=1 00:07:56.252 --rc genhtml_function_coverage=1 00:07:56.252 --rc genhtml_legend=1 00:07:56.252 --rc geninfo_all_blocks=1 00:07:56.252 --rc geninfo_unexecuted_blocks=1 00:07:56.252 00:07:56.252 ' 00:07:56.252 11:52:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.252 11:52:01 -- nvmf/common.sh@7 -- # uname -s 00:07:56.252 11:52:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.252 11:52:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.252 11:52:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.252 11:52:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.252 11:52:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.252 11:52:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.252 11:52:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.252 11:52:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.252 11:52:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.252 11:52:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.512 11:52:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:07:56.512 
11:52:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:07:56.512 11:52:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.512 11:52:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.512 11:52:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.512 11:52:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.512 11:52:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.512 11:52:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.512 11:52:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.512 11:52:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.512 11:52:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.512 11:52:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.512 11:52:01 -- paths/export.sh@5 -- # export PATH 00:07:56.512 11:52:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.512 11:52:01 -- nvmf/common.sh@46 -- # : 0 00:07:56.512 11:52:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:56.512 11:52:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:56.512 11:52:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:56.512 11:52:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.512 11:52:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.512 11:52:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:56.512 11:52:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:56.512 11:52:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:56.512 11:52:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:56.512 11:52:01 -- spdk/autotest.sh@32 -- # uname -s 00:07:56.512 11:52:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:56.512 11:52:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:56.512 11:52:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:56.512 11:52:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:56.512 11:52:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:56.512 11:52:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:56.512 11:52:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:56.512 11:52:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:56.512 11:52:01 -- spdk/autotest.sh@48 -- # 
udevadm_pid=59751 00:07:56.512 11:52:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:56.512 11:52:01 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:07:56.512 11:52:01 -- spdk/autotest.sh@54 -- # echo 59753 00:07:56.512 11:52:01 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:56.512 11:52:01 -- spdk/autotest.sh@56 -- # echo 59756 00:07:56.512 11:52:01 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:07:56.512 11:52:01 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:07:56.512 11:52:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:56.512 11:52:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:07:56.512 11:52:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.512 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.512 11:52:01 -- spdk/autotest.sh@70 -- # create_test_list 00:07:56.512 11:52:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:07:56.512 11:52:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.512 11:52:01 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:56.512 11:52:01 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:56.512 11:52:01 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:07:56.512 11:52:01 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:56.512 11:52:01 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:07:56.512 11:52:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:07:56.512 11:52:01 -- common/autotest_common.sh@1450 -- # uname 00:07:56.512 11:52:01 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:07:56.512 11:52:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:07:56.512 11:52:01 -- common/autotest_common.sh@1470 -- # uname 00:07:56.512 11:52:01 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:07:56.512 11:52:01 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:07:56.512 11:52:01 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:56.512 lcov: LCOV version 1.15 00:07:56.512 11:52:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:04.668 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:08:04.668 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:08:04.668 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:08:04.668 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:08:04.668 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:08:04.668 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:26.612 11:52:31 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:08:26.612 11:52:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.612 11:52:31 -- common/autotest_common.sh@10 -- # set +x 00:08:26.612 11:52:31 -- spdk/autotest.sh@89 -- # rm -f 00:08:26.612 11:52:31 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:26.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:26.870 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:08:26.870 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:08:26.870 11:52:32 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:08:26.870 11:52:32 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:26.870 11:52:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:26.870 11:52:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:26.870 11:52:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:26.871 11:52:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:26.871 11:52:32 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:26.871 11:52:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:26.871 11:52:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:26.871 11:52:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:26.871 11:52:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:26.871 11:52:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:26.871 11:52:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:26.871 11:52:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:26.871 11:52:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:26.871 11:52:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:26.871 11:52:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:26.871 11:52:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:26.871 11:52:32 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:08:26.871 11:52:32 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:08:26.871 11:52:32 -- spdk/autotest.sh@108 -- # grep -v p 00:08:26.871 11:52:32 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:26.871 11:52:32 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:26.871 11:52:32 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:08:26.871 11:52:32 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:08:26.871 11:52:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:26.871 No valid GPT data, bailing 00:08:26.871 11:52:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
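The pre-cleanup pass traced above decides which NVMe namespaces are safe to touch: a device is left alone if /sys/block/<dev>/queue/zoned reports anything other than "none", and it is only treated as free when neither spdk-gpt.py nor blkid finds a partition-table signature on it (the empty "pt=" followed by "return 1" in the trace). A minimal stand-alone sketch of that check, assuming illustrative helper names rather than the exact functions from scripts/common.sh:

    #!/usr/bin/env bash
    # Return 0 ("in use / do not touch") if the namespace is zoned or carries
    # a partition table, 1 otherwise. Mirrors the blkid PTTYPE probe in the log.
    block_in_use() {
        local dev=$1 pt
        # Zoned namespaces are skipped entirely.
        [[ "$(cat "/sys/block/${dev##*/}/queue/zoned" 2>/dev/null)" != "none" ]] && return 0
        # An empty PTTYPE means blkid found no GPT/MBR signature.
        pt=$(blkid -s PTTYPE -o value "$dev")
        [[ -n "$pt" ]]
    }

    # Partitions (nvme0n1p1, ...) are filtered out with 'grep -v p', as in the log.
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        block_in_use "$dev" || echo "$dev looks free"
    done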
00:08:26.871 11:52:32 -- scripts/common.sh@393 -- # pt= 00:08:26.871 11:52:32 -- scripts/common.sh@394 -- # return 1 00:08:26.871 11:52:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:26.871 1+0 records in 00:08:26.871 1+0 records out 00:08:26.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452235 s, 232 MB/s 00:08:26.871 11:52:32 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:26.871 11:52:32 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:26.871 11:52:32 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:08:26.871 11:52:32 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:08:26.871 11:52:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:26.871 No valid GPT data, bailing 00:08:26.871 11:52:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:26.871 11:52:32 -- scripts/common.sh@393 -- # pt= 00:08:26.871 11:52:32 -- scripts/common.sh@394 -- # return 1 00:08:26.871 11:52:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:26.871 1+0 records in 00:08:26.871 1+0 records out 00:08:26.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435492 s, 241 MB/s 00:08:26.871 11:52:32 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:26.871 11:52:32 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:26.871 11:52:32 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:08:26.871 11:52:32 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:08:26.871 11:52:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:27.130 No valid GPT data, bailing 00:08:27.130 11:52:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:27.130 11:52:32 -- scripts/common.sh@393 -- # pt= 00:08:27.130 11:52:32 -- scripts/common.sh@394 -- # return 1 00:08:27.130 11:52:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:27.130 1+0 records in 00:08:27.130 1+0 records out 00:08:27.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556676 s, 188 MB/s 00:08:27.130 11:52:32 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:08:27.130 11:52:32 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:08:27.130 11:52:32 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:08:27.130 11:52:32 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:08:27.130 11:52:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:27.130 No valid GPT data, bailing 00:08:27.130 11:52:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:27.130 11:52:32 -- scripts/common.sh@393 -- # pt= 00:08:27.130 11:52:32 -- scripts/common.sh@394 -- # return 1 00:08:27.130 11:52:32 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:27.130 1+0 records in 00:08:27.130 1+0 records out 00:08:27.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363566 s, 288 MB/s 00:08:27.130 11:52:32 -- spdk/autotest.sh@116 -- # sync 00:08:27.390 11:52:32 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:27.390 11:52:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:27.390 11:52:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:29.295 11:52:34 -- spdk/autotest.sh@122 -- # uname -s 00:08:29.296 11:52:34 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
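Each namespace that passes the check is then zero-filled over its first mebibyte and the page cache is flushed, so stale GPT or filesystem signatures cannot confuse the later setup tests; the dd transfer statistics in the log come straight from this step. Roughly, as a hedged sketch rather than the verbatim autotest code:

    # Zero the first 1 MiB of every free namespace, then flush caches.
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
    sync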
00:08:29.296 11:52:34 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:29.296 11:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.296 11:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.296 11:52:34 -- common/autotest_common.sh@10 -- # set +x 00:08:29.296 ************************************ 00:08:29.296 START TEST setup.sh 00:08:29.296 ************************************ 00:08:29.296 11:52:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:29.555 * Looking for test storage... 00:08:29.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:29.555 11:52:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.555 11:52:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.555 11:52:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.555 11:52:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.555 11:52:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.555 11:52:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.555 11:52:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.555 11:52:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.555 11:52:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.555 11:52:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.555 11:52:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.555 11:52:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.555 11:52:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.555 11:52:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.555 11:52:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.555 11:52:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.555 11:52:34 -- scripts/common.sh@344 -- # : 1 00:08:29.555 11:52:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.555 11:52:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.555 11:52:34 -- scripts/common.sh@364 -- # decimal 1 00:08:29.555 11:52:34 -- scripts/common.sh@352 -- # local d=1 00:08:29.555 11:52:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.555 11:52:34 -- scripts/common.sh@354 -- # echo 1 00:08:29.555 11:52:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.555 11:52:34 -- scripts/common.sh@365 -- # decimal 2 00:08:29.555 11:52:34 -- scripts/common.sh@352 -- # local d=2 00:08:29.555 11:52:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.555 11:52:34 -- scripts/common.sh@354 -- # echo 2 00:08:29.555 11:52:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.555 11:52:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.555 11:52:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.555 11:52:35 -- scripts/common.sh@367 -- # return 0 00:08:29.555 11:52:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.555 11:52:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.555 --rc genhtml_branch_coverage=1 00:08:29.555 --rc genhtml_function_coverage=1 00:08:29.555 --rc genhtml_legend=1 00:08:29.555 --rc geninfo_all_blocks=1 00:08:29.555 --rc geninfo_unexecuted_blocks=1 00:08:29.555 00:08:29.555 ' 00:08:29.555 11:52:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.555 --rc genhtml_branch_coverage=1 00:08:29.555 --rc genhtml_function_coverage=1 00:08:29.555 --rc genhtml_legend=1 00:08:29.555 --rc geninfo_all_blocks=1 00:08:29.555 --rc geninfo_unexecuted_blocks=1 00:08:29.555 00:08:29.555 ' 00:08:29.555 11:52:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.555 --rc genhtml_branch_coverage=1 00:08:29.555 --rc genhtml_function_coverage=1 00:08:29.555 --rc genhtml_legend=1 00:08:29.555 --rc geninfo_all_blocks=1 00:08:29.555 --rc geninfo_unexecuted_blocks=1 00:08:29.555 00:08:29.555 ' 00:08:29.555 11:52:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.555 --rc genhtml_branch_coverage=1 00:08:29.555 --rc genhtml_function_coverage=1 00:08:29.555 --rc genhtml_legend=1 00:08:29.555 --rc geninfo_all_blocks=1 00:08:29.555 --rc geninfo_unexecuted_blocks=1 00:08:29.555 00:08:29.555 ' 00:08:29.555 11:52:35 -- setup/test-setup.sh@10 -- # uname -s 00:08:29.555 11:52:35 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:29.555 11:52:35 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:29.555 11:52:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.555 11:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.555 11:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.555 ************************************ 00:08:29.555 START TEST acl 00:08:29.555 ************************************ 00:08:29.555 11:52:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:29.825 * Looking for test storage... 
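The "lt 1.15 2" trace repeated before each sub-test is a field-wise version comparison: both version strings are split on '.', '-' and ':' into arrays and compared position by position, and the result decides whether the extra branch/function coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) are added to LCOV_OPTS. A simplified equivalent of that comparison (the real cmp_versions in scripts/common.sh handles more operators):

    # Return 0 if $1 is strictly older than $2, e.g. version_lt 1.15 2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }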
00:08:29.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:29.825 11:52:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.825 11:52:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.825 11:52:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.825 11:52:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.825 11:52:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.825 11:52:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.825 11:52:35 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.825 11:52:35 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.825 11:52:35 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.825 11:52:35 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.825 11:52:35 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.825 11:52:35 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.825 11:52:35 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.825 11:52:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.825 11:52:35 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.825 11:52:35 -- scripts/common.sh@344 -- # : 1 00:08:29.825 11:52:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.825 11:52:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.825 11:52:35 -- scripts/common.sh@364 -- # decimal 1 00:08:29.825 11:52:35 -- scripts/common.sh@352 -- # local d=1 00:08:29.825 11:52:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.825 11:52:35 -- scripts/common.sh@354 -- # echo 1 00:08:29.825 11:52:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.825 11:52:35 -- scripts/common.sh@365 -- # decimal 2 00:08:29.825 11:52:35 -- scripts/common.sh@352 -- # local d=2 00:08:29.825 11:52:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.825 11:52:35 -- scripts/common.sh@354 -- # echo 2 00:08:29.825 11:52:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.825 11:52:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.825 11:52:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.825 11:52:35 -- scripts/common.sh@367 -- # return 0 00:08:29.825 11:52:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.825 11:52:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.825 --rc genhtml_branch_coverage=1 00:08:29.825 --rc genhtml_function_coverage=1 00:08:29.825 --rc genhtml_legend=1 00:08:29.825 --rc geninfo_all_blocks=1 00:08:29.825 --rc geninfo_unexecuted_blocks=1 00:08:29.825 00:08:29.825 ' 00:08:29.825 11:52:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.825 --rc genhtml_branch_coverage=1 00:08:29.825 --rc genhtml_function_coverage=1 00:08:29.825 --rc genhtml_legend=1 00:08:29.825 --rc geninfo_all_blocks=1 00:08:29.825 --rc geninfo_unexecuted_blocks=1 00:08:29.825 00:08:29.825 ' 00:08:29.825 11:52:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.825 --rc genhtml_branch_coverage=1 00:08:29.825 --rc genhtml_function_coverage=1 00:08:29.825 --rc genhtml_legend=1 00:08:29.825 --rc geninfo_all_blocks=1 00:08:29.825 --rc geninfo_unexecuted_blocks=1 00:08:29.825 00:08:29.825 ' 00:08:29.825 11:52:35 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.825 --rc genhtml_branch_coverage=1 00:08:29.825 --rc genhtml_function_coverage=1 00:08:29.825 --rc genhtml_legend=1 00:08:29.825 --rc geninfo_all_blocks=1 00:08:29.825 --rc geninfo_unexecuted_blocks=1 00:08:29.825 00:08:29.825 ' 00:08:29.825 11:52:35 -- setup/acl.sh@10 -- # get_zoned_devs 00:08:29.825 11:52:35 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:29.825 11:52:35 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:29.825 11:52:35 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:29.825 11:52:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:29.825 11:52:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:29.825 11:52:35 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:29.825 11:52:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:29.825 11:52:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:29.825 11:52:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:29.825 11:52:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:29.825 11:52:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:29.825 11:52:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:29.825 11:52:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:29.825 11:52:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:29.825 11:52:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:29.825 11:52:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:29.825 11:52:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:29.825 11:52:35 -- setup/acl.sh@12 -- # devs=() 00:08:29.825 11:52:35 -- setup/acl.sh@12 -- # declare -a devs 00:08:29.825 11:52:35 -- setup/acl.sh@13 -- # drivers=() 00:08:29.825 11:52:35 -- setup/acl.sh@13 -- # declare -A drivers 00:08:29.825 11:52:35 -- setup/acl.sh@51 -- # setup reset 00:08:29.825 11:52:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:29.825 11:52:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:30.761 11:52:35 -- setup/acl.sh@52 -- # collect_setup_devs 00:08:30.761 11:52:35 -- setup/acl.sh@16 -- # local dev driver 00:08:30.761 11:52:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.761 11:52:35 -- setup/acl.sh@15 -- # setup output status 00:08:30.761 11:52:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:30.761 11:52:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:30.761 Hugepages 00:08:30.761 node hugesize free / total 00:08:30.761 11:52:36 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:30.761 11:52:36 -- setup/acl.sh@19 -- # continue 00:08:30.761 11:52:36 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:08:30.761 00:08:30.761 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:30.761 11:52:36 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:30.761 11:52:36 -- setup/acl.sh@19 -- # continue 00:08:30.761 11:52:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:30.761 11:52:36 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:30.761 11:52:36 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:30.761 11:52:36 -- setup/acl.sh@20 -- # continue 00:08:30.761 11:52:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:31.020 11:52:36 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:08:31.020 11:52:36 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:31.020 11:52:36 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:31.020 11:52:36 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:31.020 11:52:36 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:31.020 11:52:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:31.020 11:52:36 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:08:31.020 11:52:36 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:31.020 11:52:36 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:31.020 11:52:36 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:31.020 11:52:36 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:31.020 11:52:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:31.020 11:52:36 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:31.020 11:52:36 -- setup/acl.sh@54 -- # run_test denied denied 00:08:31.020 11:52:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:31.020 11:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.020 11:52:36 -- common/autotest_common.sh@10 -- # set +x 00:08:31.020 ************************************ 00:08:31.020 START TEST denied 00:08:31.020 ************************************ 00:08:31.020 11:52:36 -- common/autotest_common.sh@1114 -- # denied 00:08:31.020 11:52:36 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:08:31.020 11:52:36 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:08:31.020 11:52:36 -- setup/acl.sh@38 -- # setup output config 00:08:31.020 11:52:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:31.020 11:52:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:31.957 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:08:31.958 11:52:37 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:08:31.958 11:52:37 -- setup/acl.sh@28 -- # local dev driver 00:08:31.958 11:52:37 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:31.958 11:52:37 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:08:31.958 11:52:37 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:08:31.958 11:52:37 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:31.958 11:52:37 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:31.958 11:52:37 -- setup/acl.sh@41 -- # setup reset 00:08:31.958 11:52:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:31.958 11:52:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.526 00:08:32.526 real 0m1.547s 00:08:32.526 user 0m0.624s 00:08:32.526 sys 0m0.869s 00:08:32.526 11:52:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.526 ************************************ 00:08:32.526 END TEST denied 00:08:32.526 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.526 
************************************ 00:08:32.526 11:52:37 -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:32.526 11:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.526 11:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.526 11:52:37 -- common/autotest_common.sh@10 -- # set +x 00:08:32.526 ************************************ 00:08:32.526 START TEST allowed 00:08:32.526 ************************************ 00:08:32.526 11:52:37 -- common/autotest_common.sh@1114 -- # allowed 00:08:32.526 11:52:37 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:08:32.526 11:52:37 -- setup/acl.sh@45 -- # setup output config 00:08:32.526 11:52:37 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:08:32.526 11:52:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:32.526 11:52:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:33.462 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:33.462 11:52:38 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:08:33.462 11:52:38 -- setup/acl.sh@28 -- # local dev driver 00:08:33.462 11:52:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:33.462 11:52:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:08:33.462 11:52:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:08:33.462 11:52:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:33.462 11:52:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:33.462 11:52:38 -- setup/acl.sh@48 -- # setup reset 00:08:33.462 11:52:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:33.462 11:52:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:34.425 00:08:34.425 real 0m1.608s 00:08:34.425 user 0m0.708s 00:08:34.425 sys 0m0.894s 00:08:34.425 11:52:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.425 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:34.425 ************************************ 00:08:34.425 END TEST allowed 00:08:34.425 ************************************ 00:08:34.425 00:08:34.425 real 0m4.616s 00:08:34.425 user 0m2.030s 00:08:34.425 sys 0m2.553s 00:08:34.425 11:52:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.425 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:34.425 ************************************ 00:08:34.425 END TEST acl 00:08:34.425 ************************************ 00:08:34.425 11:52:39 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:34.425 11:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.425 11:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.425 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:34.425 ************************************ 00:08:34.425 START TEST hugepages 00:08:34.425 ************************************ 00:08:34.425 11:52:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:34.425 * Looking for test storage... 
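The acl suite above exercises scripts/setup.sh through two environment variables: PCI_BLOCKED makes setup.sh skip the listed controller (the "denied" test greps for the skip message), while PCI_ALLOWED limits binding to the listed BDFs (the "allowed" test then checks that only that controller is rebound, nvme -> uio_pci_generic on this runner). A condensed illustration of the pattern, with the BDF and grep patterns taken from the log and error handling omitted (setup.sh normally needs root):

    # Denied: the blocked controller must be skipped.
    PCI_BLOCKED=' 0000:00:06.0' ./scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:06.0'

    # Allowed: only the allowed controller is rebound to a userspace driver.
    PCI_ALLOWED='0000:00:06.0' ./scripts/setup.sh config \
        | grep -E '0000:00:06.0 .*: nvme -> .*'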
00:08:34.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:34.425 11:52:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.425 11:52:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.425 11:52:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.425 11:52:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.425 11:52:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.425 11:52:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.425 11:52:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.425 11:52:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.425 11:52:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.425 11:52:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.425 11:52:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.425 11:52:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.425 11:52:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.425 11:52:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.425 11:52:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.425 11:52:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.425 11:52:39 -- scripts/common.sh@344 -- # : 1 00:08:34.425 11:52:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.425 11:52:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.425 11:52:39 -- scripts/common.sh@364 -- # decimal 1 00:08:34.425 11:52:39 -- scripts/common.sh@352 -- # local d=1 00:08:34.425 11:52:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.425 11:52:39 -- scripts/common.sh@354 -- # echo 1 00:08:34.425 11:52:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.425 11:52:39 -- scripts/common.sh@365 -- # decimal 2 00:08:34.425 11:52:39 -- scripts/common.sh@352 -- # local d=2 00:08:34.425 11:52:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.425 11:52:39 -- scripts/common.sh@354 -- # echo 2 00:08:34.425 11:52:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.425 11:52:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.425 11:52:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.425 11:52:39 -- scripts/common.sh@367 -- # return 0 00:08:34.425 11:52:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.425 11:52:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.425 --rc genhtml_branch_coverage=1 00:08:34.425 --rc genhtml_function_coverage=1 00:08:34.425 --rc genhtml_legend=1 00:08:34.425 --rc geninfo_all_blocks=1 00:08:34.425 --rc geninfo_unexecuted_blocks=1 00:08:34.425 00:08:34.425 ' 00:08:34.425 11:52:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.425 --rc genhtml_branch_coverage=1 00:08:34.425 --rc genhtml_function_coverage=1 00:08:34.425 --rc genhtml_legend=1 00:08:34.425 --rc geninfo_all_blocks=1 00:08:34.425 --rc geninfo_unexecuted_blocks=1 00:08:34.425 00:08:34.425 ' 00:08:34.425 11:52:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.425 --rc genhtml_branch_coverage=1 00:08:34.425 --rc genhtml_function_coverage=1 00:08:34.425 --rc genhtml_legend=1 00:08:34.425 --rc geninfo_all_blocks=1 00:08:34.425 --rc geninfo_unexecuted_blocks=1 00:08:34.425 00:08:34.425 ' 00:08:34.425 11:52:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.425 --rc genhtml_branch_coverage=1 00:08:34.425 --rc genhtml_function_coverage=1 00:08:34.425 --rc genhtml_legend=1 00:08:34.425 --rc geninfo_all_blocks=1 00:08:34.425 --rc geninfo_unexecuted_blocks=1 00:08:34.425 00:08:34.425 ' 00:08:34.425 11:52:39 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:34.425 11:52:39 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:34.425 11:52:39 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:34.425 11:52:39 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:34.425 11:52:39 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:34.425 11:52:39 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:34.425 11:52:39 -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:34.425 11:52:39 -- setup/common.sh@18 -- # local node= 00:08:34.425 11:52:39 -- setup/common.sh@19 -- # local var val 00:08:34.425 11:52:39 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.425 11:52:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.425 11:52:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.425 11:52:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.425 11:52:39 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.425 11:52:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.425 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.425 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.425 11:52:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4840568 kB' 'MemAvailable: 7342164 kB' 'Buffers: 2684 kB' 'Cached: 2706144 kB' 'SwapCached: 0 kB' 'Active: 454988 kB' 'Inactive: 2370456 kB' 'Active(anon): 127128 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370456 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118560 kB' 'Mapped: 50772 kB' 'Shmem: 10512 kB' 'KReclaimable: 80456 kB' 'Slab: 179472 kB' 'SReclaimable: 80456 kB' 'SUnreclaim: 99016 kB' 'KernelStack: 6800 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 321772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:34.425 11:52:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- 
setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.426 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.426 11:52:39 -- setup/common.sh@31 -- 
# read -r var val _ 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.426 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # continue 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.427 11:52:39 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.427 11:52:39 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:34.427 11:52:39 -- setup/common.sh@33 -- # echo 2048 00:08:34.427 11:52:39 -- setup/common.sh@33 -- # return 0 00:08:34.427 11:52:39 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:34.427 11:52:39 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:34.427 11:52:39 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:34.427 11:52:39 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:34.427 11:52:39 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:34.427 11:52:39 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:34.427 11:52:39 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:34.427 11:52:39 -- setup/hugepages.sh@207 -- # get_nodes 00:08:34.427 11:52:39 -- setup/hugepages.sh@27 -- # local node 00:08:34.427 11:52:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:34.427 11:52:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:34.427 11:52:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:34.427 11:52:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:34.427 11:52:39 -- setup/hugepages.sh@208 -- # clear_hp 00:08:34.427 11:52:39 -- setup/hugepages.sh@37 -- # local node hp 00:08:34.427 11:52:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:34.427 11:52:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:34.427 11:52:39 -- setup/hugepages.sh@41 -- # echo 0 00:08:34.427 11:52:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:34.427 11:52:39 -- setup/hugepages.sh@41 -- # echo 0 00:08:34.686 11:52:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:34.686 11:52:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:34.686 11:52:39 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:34.686 11:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.686 11:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.686 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:34.686 ************************************ 00:08:34.686 START TEST default_setup 00:08:34.686 ************************************ 00:08:34.686 11:52:39 -- common/autotest_common.sh@1114 -- # default_setup 00:08:34.686 11:52:39 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:34.686 11:52:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:34.686 11:52:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:34.686 11:52:39 -- setup/hugepages.sh@51 -- # shift 00:08:34.686 11:52:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:34.686 11:52:39 -- setup/hugepages.sh@52 -- # local node_ids 00:08:34.686 11:52:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:34.686 11:52:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:34.686 11:52:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:34.686 11:52:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:34.686 11:52:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:34.686 11:52:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:34.686 11:52:39 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:34.686 11:52:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:34.686 11:52:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:34.686 11:52:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:34.686 11:52:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:34.686 11:52:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:34.686 11:52:39 -- setup/hugepages.sh@73 -- # return 0 00:08:34.686 11:52:39 -- setup/hugepages.sh@137 -- # setup output 00:08:34.686 11:52:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:34.686 11:52:39 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.254 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.516 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.516 11:52:40 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:35.516 11:52:40 -- setup/hugepages.sh@89 -- # local node 00:08:35.516 11:52:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:35.516 11:52:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:35.516 11:52:40 -- setup/hugepages.sh@92 -- # local surp 00:08:35.516 11:52:40 -- setup/hugepages.sh@93 -- # local resv 00:08:35.516 11:52:40 -- setup/hugepages.sh@94 -- # local anon 00:08:35.516 11:52:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:35.516 11:52:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:35.516 11:52:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:35.516 11:52:40 -- setup/common.sh@18 -- # local node= 00:08:35.516 11:52:40 -- setup/common.sh@19 -- # local var val 00:08:35.516 11:52:40 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.516 11:52:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.516 11:52:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.516 11:52:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.516 11:52:40 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.516 11:52:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6958852 kB' 'MemAvailable: 9460300 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456548 kB' 'Inactive: 2370472 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50892 kB' 'Shmem: 10488 kB' 'KReclaimable: 80124 kB' 'Slab: 179068 kB' 'SReclaimable: 80124 kB' 'SUnreclaim: 98944 kB' 'KernelStack: 6768 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read 
-r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.516 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.516 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- 
setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.517 11:52:40 -- setup/common.sh@33 -- # echo 0 00:08:35.517 11:52:40 -- setup/common.sh@33 -- # return 0 00:08:35.517 11:52:40 -- setup/hugepages.sh@97 -- # anon=0 00:08:35.517 11:52:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:35.517 11:52:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:35.517 11:52:40 -- setup/common.sh@18 -- # local node= 00:08:35.517 11:52:40 -- setup/common.sh@19 -- # local var val 00:08:35.517 11:52:40 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.517 11:52:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.517 11:52:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.517 11:52:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.517 11:52:40 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.517 11:52:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6959112 kB' 'MemAvailable: 9460560 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456448 kB' 'Inactive: 2370472 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80124 kB' 'Slab: 179072 kB' 'SReclaimable: 80124 kB' 'SUnreclaim: 98948 kB' 'KernelStack: 6768 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.517 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.517 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 
00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- 
setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.518 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.518 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 
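The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above and below are setup/common.sh's get_meminfo walking the cached /proc/meminfo fields one at a time under IFS=': ' until it reaches the requested key, then echoing only the numeric value (2048 for Hugepagesize earlier, 0 for AnonHugePages and HugePages_Surp here). A minimal standalone sketch of that lookup pattern follows; get_meminfo_field is an illustrative name chosen for the sketch, not the in-tree helper, and the details differ from the real implementation.

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo lookup traced in this log. get_meminfo_field is an
    # assumed name for illustration; the repo helper is setup/common.sh's get_meminfo.
    shopt -s extglob

    get_meminfo_field() {                    # usage: get_meminfo_field HugePages_Surp [node]
        local get=$1 node=${2:-} entry var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            if [[ $var == "$get" ]]; then
                echo "$val"                  # value only, without the trailing "kB"
                return 0
            fi
        done
        return 1
    }

    # The values this particular run reports:
    get_meminfo_field Hugepagesize           # -> 2048
    get_meminfo_field HugePages_Surp         # -> 0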
00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.519 11:52:40 -- setup/common.sh@33 -- # echo 0 00:08:35.519 11:52:40 -- setup/common.sh@33 -- # return 0 00:08:35.519 11:52:40 -- setup/hugepages.sh@99 -- # surp=0 00:08:35.519 11:52:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:35.519 11:52:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:35.519 11:52:40 -- setup/common.sh@18 -- # local node= 00:08:35.519 11:52:40 -- setup/common.sh@19 -- # local var val 00:08:35.519 11:52:40 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.519 11:52:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.519 11:52:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.519 11:52:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.519 11:52:40 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.519 11:52:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.519 
11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6959112 kB' 'MemAvailable: 9460552 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456604 kB' 'Inactive: 2370472 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80108 kB' 'Slab: 179056 kB' 'SReclaimable: 80108 kB' 'SUnreclaim: 98948 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 
11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.519 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.519 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 
11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.520 11:52:40 -- setup/common.sh@33 -- # echo 0 00:08:35.520 11:52:40 -- setup/common.sh@33 -- # return 0 00:08:35.520 nr_hugepages=1024 00:08:35.520 resv_hugepages=0 00:08:35.520 11:52:40 -- setup/hugepages.sh@100 -- # resv=0 00:08:35.520 11:52:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:35.520 11:52:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:35.520 surplus_hugepages=0 00:08:35.520 11:52:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:35.520 anon_hugepages=0 00:08:35.520 11:52:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:35.520 11:52:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:35.520 11:52:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:35.520 11:52:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:35.520 11:52:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:35.520 11:52:40 -- setup/common.sh@18 -- # local node= 00:08:35.520 11:52:40 -- setup/common.sh@19 -- # local var val 00:08:35.520 11:52:40 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.520 11:52:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.520 11:52:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.520 11:52:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.520 11:52:40 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.520 11:52:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6959112 kB' 'MemAvailable: 9460552 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456528 kB' 'Inactive: 2370472 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50772 kB' 
'Shmem: 10488 kB' 'KReclaimable: 80108 kB' 'Slab: 179056 kB' 'SReclaimable: 80108 kB' 'SUnreclaim: 98948 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.520 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.520 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 
-- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- 
setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.521 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.521 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:40 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:40 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:41 -- 
setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.522 11:52:41 -- setup/common.sh@33 -- # echo 1024 00:08:35.522 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:35.522 11:52:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:35.522 11:52:41 -- setup/hugepages.sh@112 -- # get_nodes 00:08:35.522 11:52:41 -- setup/hugepages.sh@27 -- # local node 00:08:35.522 11:52:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:35.522 11:52:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:35.522 11:52:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:35.522 11:52:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:35.522 11:52:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:35.522 11:52:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:35.522 11:52:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:35.522 11:52:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:35.522 11:52:41 -- setup/common.sh@18 -- # local node=0 00:08:35.522 11:52:41 -- setup/common.sh@19 -- # local var val 00:08:35.522 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.522 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.522 11:52:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:35.522 11:52:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:35.522 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.522 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6959788 kB' 'MemUsed: 5279328 kB' 'SwapCached: 0 kB' 'Active: 456592 kB' 'Inactive: 2370472 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2708820 kB' 'Mapped: 50772 kB' 'AnonPages: 119864 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80108 kB' 'Slab: 179056 kB' 'SReclaimable: 80108 kB' 'SUnreclaim: 98948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 
11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.522 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.522 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.782 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.782 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.782 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.782 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.782 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.782 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.782 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.782 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.782 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # continue 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.783 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.783 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.783 11:52:41 -- setup/common.sh@33 -- # echo 0 00:08:35.783 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:35.783 11:52:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:35.783 11:52:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:35.783 node0=1024 expecting 1024 00:08:35.783 ************************************ 00:08:35.783 END TEST default_setup 00:08:35.783 ************************************ 00:08:35.783 11:52:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:35.783 11:52:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:35.783 11:52:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:35.783 11:52:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:35.783 00:08:35.783 real 0m1.089s 00:08:35.783 user 0m0.491s 00:08:35.783 sys 0m0.507s 00:08:35.783 11:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.783 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.783 11:52:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:35.783 11:52:41 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.783 11:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.783 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:35.783 ************************************ 00:08:35.783 START TEST per_node_1G_alloc 00:08:35.783 ************************************ 00:08:35.783 11:52:41 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:08:35.783 11:52:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:08:35.783 11:52:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:35.783 11:52:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:35.783 11:52:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:35.783 11:52:41 -- setup/hugepages.sh@51 -- # shift 00:08:35.783 11:52:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:35.783 11:52:41 -- setup/hugepages.sh@52 -- # local node_ids 00:08:35.783 11:52:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:35.783 11:52:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:35.783 11:52:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:35.783 11:52:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:35.783 11:52:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:35.783 11:52:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:35.783 11:52:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:35.783 11:52:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:35.783 11:52:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:35.783 11:52:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:35.783 11:52:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:35.783 11:52:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:35.783 11:52:41 -- setup/hugepages.sh@73 -- # return 0 00:08:35.783 11:52:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:35.783 11:52:41 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:35.783 11:52:41 -- setup/hugepages.sh@146 -- # setup output 00:08:35.783 11:52:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:35.783 11:52:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.044 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.044 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.044 11:52:41 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:36.044 11:52:41 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:36.044 11:52:41 -- setup/hugepages.sh@89 -- # local node 00:08:36.044 11:52:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:36.044 11:52:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:36.044 11:52:41 -- setup/hugepages.sh@92 -- # local surp 00:08:36.044 11:52:41 -- setup/hugepages.sh@93 -- # local resv 00:08:36.044 11:52:41 -- setup/hugepages.sh@94 -- # local anon 00:08:36.044 11:52:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:36.044 11:52:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:36.044 11:52:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:36.044 11:52:41 -- setup/common.sh@18 -- # local node= 00:08:36.044 11:52:41 -- setup/common.sh@19 -- # local var val 00:08:36.044 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.044 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.044 11:52:41 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.044 11:52:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.044 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.044 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8006392 kB' 'MemAvailable: 10507828 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456560 kB' 'Inactive: 2370472 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179068 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98964 kB' 'KernelStack: 6760 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 
-- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.044 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.044 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 
11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.045 11:52:41 -- setup/common.sh@33 -- # echo 0 00:08:36.045 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:36.045 11:52:41 -- setup/hugepages.sh@97 -- # anon=0 00:08:36.045 11:52:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:36.045 11:52:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.045 11:52:41 -- setup/common.sh@18 -- # local node= 00:08:36.045 11:52:41 -- setup/common.sh@19 -- # local var val 00:08:36.045 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.045 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.045 11:52:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.045 11:52:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.045 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.045 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8006392 kB' 'MemAvailable: 10507828 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456748 kB' 'Inactive: 2370472 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 
kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179060 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98956 kB' 'KernelStack: 6728 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.045 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.045 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # 
continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.308 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.308 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.309 11:52:41 -- setup/common.sh@33 -- # echo 0 00:08:36.309 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:36.309 11:52:41 -- setup/hugepages.sh@99 -- # surp=0 00:08:36.309 11:52:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:36.309 11:52:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:36.309 11:52:41 -- setup/common.sh@18 -- # local node= 00:08:36.309 11:52:41 -- setup/common.sh@19 -- # local var val 00:08:36.309 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.309 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.309 11:52:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.309 11:52:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.309 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.309 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8006392 kB' 'MemAvailable: 10507828 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456448 kB' 'Inactive: 2370472 kB' 'Active(anon): 128588 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'KernelStack: 6752 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.309 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.309 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.310 11:52:41 -- setup/common.sh@33 -- # echo 0 00:08:36.310 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:36.310 11:52:41 -- setup/hugepages.sh@100 -- # resv=0 00:08:36.310 nr_hugepages=512 00:08:36.310 11:52:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:36.310 resv_hugepages=0 00:08:36.310 11:52:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:36.310 surplus_hugepages=0 00:08:36.310 11:52:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:36.310 anon_hugepages=0 00:08:36.310 11:52:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:36.310 11:52:41 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:36.310 11:52:41 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:36.310 11:52:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:36.310 11:52:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:36.310 11:52:41 -- setup/common.sh@18 -- # local node= 00:08:36.310 11:52:41 -- setup/common.sh@19 -- # local var val 00:08:36.310 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.310 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.310 11:52:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.310 11:52:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.310 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.310 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.310 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.310 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8006392 kB' 'MemAvailable: 10507828 kB' 'Buffers: 2684 kB' 'Cached: 2706136 kB' 'SwapCached: 0 kB' 'Active: 456740 kB' 'Inactive: 2370472 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119964 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.310 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 
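The xtrace above shows the reserved-page lookup returning 0 and the test then asserting that the 512 requested pages account for HugePages_Total plus surplus and reserved pages. A minimal, self-contained sketch of that accounting check (not the SPDK helper itself; it reads /proc/meminfo directly rather than going through get_meminfo):

  check_hugepage_accounting() {
    local requested=$1                                     # e.g. 512 in this test
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == requested + surp + resv ))                 # non-zero exit status on mismatch
  }
  # usage: check_hugepage_accounting 512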
11:52:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 
11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.311 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.311 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.312 11:52:41 -- setup/common.sh@33 -- # echo 512 00:08:36.312 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:36.312 11:52:41 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:36.312 11:52:41 -- setup/hugepages.sh@112 -- # get_nodes 00:08:36.312 11:52:41 -- setup/hugepages.sh@27 -- # local node 00:08:36.312 11:52:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:36.312 11:52:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:36.312 11:52:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:36.312 11:52:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:36.312 11:52:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:36.312 11:52:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:36.312 11:52:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:36.312 11:52:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.312 11:52:41 -- setup/common.sh@18 -- # local node=0 00:08:36.312 11:52:41 -- setup/common.sh@19 -- # local 
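get_nodes in the trace above walks the NUMA node directories under /sys/devices/system/node and records a hugepage count per node before the per-node checks run. A hedged sketch of that enumeration; reading the count from each node's meminfo file is an assumption here, the real helper may use a different sysfs source:

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node_dir in /sys/devices/system/node/node+([0-9]); do
    id=${node_dir##*node}                                   # numeric node id from the dir name
    nodes_sys[$id]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
  done
  echo "detected ${#nodes_sys[@]} node(s): ${!nodes_sys[*]}"  # a single node on this runner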
var val 00:08:36.312 11:52:41 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.312 11:52:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.312 11:52:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:36.312 11:52:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:36.312 11:52:41 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.312 11:52:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8007452 kB' 'MemUsed: 4231664 kB' 'SwapCached: 0 kB' 'Active: 456628 kB' 'Inactive: 2370472 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2708820 kB' 'Mapped: 50772 kB' 'AnonPages: 119884 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- 
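The block above is the per-node variant of get_meminfo: it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from each line, and scans key/value pairs until the requested field matches. A rough reconstruction of that behaviour (the real helper is the setup/common.sh get_meminfo seen in the trace, which uses mapfile plus an extglob prefix strip; this sketch streams the file line by line instead):

  get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
      line=${line#Node "$node" }              # per-node files prefix every key with "Node N "
      IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value kB" into key and value
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0                                    # field absent -> report 0 (sketch's choice)
  }
  # usage: get_meminfo_sketch HugePages_Surp 0   -> surplus 2 MiB pages on node 0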
setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.312 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.312 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # continue 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.313 11:52:41 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.313 11:52:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.313 11:52:41 -- setup/common.sh@33 -- # echo 0 00:08:36.313 11:52:41 -- setup/common.sh@33 -- # return 0 00:08:36.313 11:52:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:36.313 11:52:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:36.313 11:52:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:36.313 node0=512 expecting 512 00:08:36.313 11:52:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:36.313 11:52:41 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:36.313 00:08:36.313 real 0m0.567s 00:08:36.313 user 0m0.278s 00:08:36.313 sys 0m0.324s 00:08:36.313 11:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.313 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 ************************************ 00:08:36.313 END TEST per_node_1G_alloc 00:08:36.313 ************************************ 00:08:36.313 11:52:41 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:36.313 11:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.313 11:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.313 11:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:36.313 ************************************ 00:08:36.313 START TEST even_2G_alloc 00:08:36.313 ************************************ 00:08:36.313 11:52:41 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:08:36.313 11:52:41 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:36.313 11:52:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:36.313 11:52:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:36.313 11:52:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:36.313 11:52:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:36.313 11:52:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:36.313 11:52:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:36.313 11:52:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:36.313 11:52:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:36.313 11:52:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:36.313 11:52:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:36.313 11:52:41 -- 
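The per_node_1G_alloc case above ends with node0 holding the expected 512 pages, and even_2G_alloc then asks for 2097152 kB. A sketch of the size-to-page-count arithmetic implied by the trace (nr_hugepages becomes 1024 because the runner's Hugepagesize is 2048 kB; dividing by Hugepagesize and splitting evenly across nodes is the presumed behaviour, not a verbatim copy of the script):

  size_kb=2097152
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # -> 1024 pages
  no_nodes=1                                                           # one NUMA node in this log
  declare -a nodes_test
  for (( n = 0; n < no_nodes; n++ )); do
    nodes_test[n]=$(( nr_hugepages / no_nodes ))                       # node 0 gets all 1024
  done
  echo "nr_hugepages=$nr_hugepages per node: ${nodes_test[*]}"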
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:36.313 11:52:41 -- setup/hugepages.sh@83 -- # : 0 00:08:36.313 11:52:41 -- setup/hugepages.sh@84 -- # : 0 00:08:36.313 11:52:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:36.313 11:52:41 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:36.313 11:52:41 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:36.313 11:52:41 -- setup/hugepages.sh@153 -- # setup output 00:08:36.313 11:52:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.313 11:52:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.835 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.835 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.835 11:52:42 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:36.835 11:52:42 -- setup/hugepages.sh@89 -- # local node 00:08:36.835 11:52:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:36.835 11:52:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:36.835 11:52:42 -- setup/hugepages.sh@92 -- # local surp 00:08:36.835 11:52:42 -- setup/hugepages.sh@93 -- # local resv 00:08:36.835 11:52:42 -- setup/hugepages.sh@94 -- # local anon 00:08:36.835 11:52:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:36.835 11:52:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:36.835 11:52:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:36.835 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:36.835 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:36.835 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.835 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.835 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.835 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.835 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.835 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967316 kB' 'MemAvailable: 9468752 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456904 kB' 'Inactive: 2370472 kB' 'Active(anon): 129044 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120172 kB' 'Mapped: 50884 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179084 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98980 kB' 'KernelStack: 6772 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.835 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.835 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 
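verify_nr_hugepages in the trace above first checks the transparent-hugepage mode string ("always [madvise] never") before counting AnonHugePages. A hedged sketch of that guard; the trace only shows the expanded string, so reading it from /sys/kernel/mm/transparent_hugepage/enabled is an assumption:

  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp_mode != *"[never]"* ]]; then
    # THP is not disabled, so report anonymous huge pages currently in use (kB)
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  else
    anon=0
  fi
  echo "anon_hugepages=${anon:-0}"                        # 0 kB in this run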
11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # 
continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.836 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.836 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.836 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:36.836 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:36.836 11:52:42 -- setup/hugepages.sh@97 -- # anon=0 00:08:36.836 11:52:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:36.836 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.836 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:36.836 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:36.836 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.836 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.836 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.836 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.837 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.837 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967828 kB' 'MemAvailable: 9469264 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456496 kB' 'Inactive: 2370472 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119744 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179076 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98972 kB' 'KernelStack: 6712 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 
00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.837 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.837 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # 
continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.838 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.838 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.838 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:36.838 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:36.838 11:52:42 -- setup/hugepages.sh@99 -- # surp=0 00:08:36.838 11:52:42 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:36.838 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:36.838 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:36.838 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:36.838 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.838 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.839 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.839 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.839 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.839 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967576 kB' 'MemAvailable: 9469012 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456700 kB' 'Inactive: 2370472 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 50824 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179068 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98964 kB' 'KernelStack: 6744 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 
00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.839 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.839 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- 
setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 
00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.840 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:36.840 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:36.840 11:52:42 -- setup/hugepages.sh@100 -- # resv=0 00:08:36.840 nr_hugepages=1024 00:08:36.840 11:52:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:36.840 resv_hugepages=0 00:08:36.840 11:52:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:36.840 surplus_hugepages=0 00:08:36.840 11:52:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:36.840 anon_hugepages=0 00:08:36.840 11:52:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:36.840 11:52:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:36.840 11:52:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:36.840 11:52:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:36.840 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:36.840 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:36.840 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:36.840 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.840 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.840 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.840 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.840 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.840 11:52:42 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968228 kB' 'MemAvailable: 9469664 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456380 kB' 'Inactive: 2370472 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179080 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 6768 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.840 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.840 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 
11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 
00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.841 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.841 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 
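All three lookups in this stretch (HugePages_Surp, HugePages_Rsvd, HugePages_Total) feed one consistency check: the pool the test configured must account for everything the kernel reports, with no surplus or reserved pages left over. Using the values visible in the trace (1024 total, 0 surplus, 0 reserved), a hedged sketch of the check that the next trace lines perform at hugepages.sh@110 is:

  # Consistency check sketched from hugepages.sh@99-@110 in the trace above.
  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  total=$(get_meminfo HugePages_Total)  # 1024 in this run
  (( total == nr_hugepages + surp + resv ))   # must hold, or the test fails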
00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.842 11:52:42 -- setup/common.sh@33 -- # echo 1024 00:08:36.842 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:36.842 11:52:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:36.842 11:52:42 -- setup/hugepages.sh@112 -- # get_nodes 00:08:36.842 11:52:42 -- setup/hugepages.sh@27 -- # local node 00:08:36.842 11:52:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:36.842 11:52:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:36.842 11:52:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:36.842 11:52:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:36.842 11:52:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:36.842 11:52:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:36.842 11:52:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:36.842 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.842 11:52:42 -- setup/common.sh@18 -- # local node=0 00:08:36.842 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:36.842 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.842 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.842 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:36.842 11:52:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:36.842 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.842 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968228 kB' 'MemUsed: 5270888 kB' 'SwapCached: 0 kB' 'Active: 456560 kB' 'Inactive: 2370472 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2708824 kB' 'Mapped: 50772 kB' 'AnonPages: 119808 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179080 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:36.842 11:52:42 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 
00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.842 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.842 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- 
setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # continue 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.843 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.843 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.843 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:36.843 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:36.843 11:52:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:36.843 11:52:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:36.843 11:52:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:36.843 11:52:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:36.843 
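With the global totals verified, verify_nr_hugepages repeats the accounting per NUMA node: get_nodes records what sysfs reports for each /sys/devices/system/node/node<N>, and the loop traced above adds the reserved and per-node surplus pages to the expected count before comparing. On this single-node VM that reduces to node 0 holding all 1024 pages, which is exactly what the 'node0=1024 expecting 1024' line below prints. A hedged sketch of the per-node loop, collapsing the two passes seen in the trace into one:

  # Per-node verification sketched from hugepages.sh@115-@130 above;
  # nodes_test[] holds the expected page count, nodes_sys[] the sysfs value.
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # reserved pages (0 here)
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus pages (0 here)
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]                # 1024 == 1024 on this VM
  done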
node0=1024 expecting 1024 00:08:36.843 11:52:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:36.843 11:52:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:36.843 00:08:36.843 real 0m0.589s 00:08:36.843 user 0m0.304s 00:08:36.843 sys 0m0.318s 00:08:36.843 11:52:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.843 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.843 ************************************ 00:08:36.843 END TEST even_2G_alloc 00:08:36.843 ************************************ 00:08:37.102 11:52:42 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:37.102 11:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.102 11:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.102 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:37.102 ************************************ 00:08:37.102 START TEST odd_alloc 00:08:37.102 ************************************ 00:08:37.102 11:52:42 -- common/autotest_common.sh@1114 -- # odd_alloc 00:08:37.102 11:52:42 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:37.102 11:52:42 -- setup/hugepages.sh@49 -- # local size=2098176 00:08:37.102 11:52:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:37.102 11:52:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:37.102 11:52:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:37.102 11:52:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:37.102 11:52:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:37.102 11:52:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:37.102 11:52:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:37.102 11:52:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:37.102 11:52:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:37.102 11:52:42 -- setup/hugepages.sh@83 -- # : 0 00:08:37.102 11:52:42 -- setup/hugepages.sh@84 -- # : 0 00:08:37.102 11:52:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:37.102 11:52:42 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:37.102 11:52:42 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:37.102 11:52:42 -- setup/hugepages.sh@160 -- # setup output 00:08:37.102 11:52:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:37.102 11:52:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:37.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.363 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.363 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.363 11:52:42 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:37.363 11:52:42 -- setup/hugepages.sh@89 -- # local node 00:08:37.363 11:52:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:37.363 11:52:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:37.363 11:52:42 -- setup/hugepages.sh@92 -- # local surp 00:08:37.363 11:52:42 -- setup/hugepages.sh@93 -- # local resv 00:08:37.363 11:52:42 -- setup/hugepages.sh@94 -- # local anon 00:08:37.363 11:52:42 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:37.363 11:52:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:37.363 11:52:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:37.363 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:37.363 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:37.363 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.363 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.363 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.363 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.363 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.363 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.363 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6963512 kB' 'MemAvailable: 9464952 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 457048 kB' 'Inactive: 2370476 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120092 kB' 'Mapped: 50952 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179068 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98964 kB' 'KernelStack: 6776 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.363 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.363 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 
00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # 
continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.364 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.364 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:37.364 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:37.364 11:52:42 -- setup/hugepages.sh@97 -- # anon=0 00:08:37.364 11:52:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:37.364 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:37.364 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:37.364 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:37.364 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.364 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.364 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.364 11:52:42 -- 
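The long runs of "continue" above (and in the HugePages_Surp, HugePages_Rsvd and HugePages_Total lookups that follow) are setup/common.sh's get_meminfo helper scanning the captured meminfo text field by field until it reaches the requested key; AnonHugePages resolves to 0 here, so anon=0. A condensed sketch of that scan, reconstructed from the xtrace; it is an approximation of the helper, not the script itself:

# approximate reconstruction of get_meminfo as seen in the xtrace
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # when a node is given, the per-node counters are read instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem line
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node $node }")        # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the repeated "continue" lines in the trace
        echo "${val:-0}"
        return 0
    done
    return 1
}
# the lookups performed in this test would then read:
#   anon=$(get_meminfo AnonHugePages)          # 0 in this run
#   surp=$(get_meminfo HugePages_Surp)         # 0 system-wide
#   total=$(get_meminfo HugePages_Total)       # 1025
#   node0_surp=$(get_meminfo HugePages_Surp 0) # 0 on node 0 (per-node pass at the end)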
setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.364 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.364 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.364 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6963512 kB' 'MemAvailable: 9464952 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456820 kB' 'Inactive: 2370476 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120088 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179064 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 6768 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 
11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.365 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.365 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 
00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.366 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:37.366 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:37.366 11:52:42 -- setup/hugepages.sh@99 -- # surp=0 00:08:37.366 11:52:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:37.366 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:37.366 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:37.366 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:37.366 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.366 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.366 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.366 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.366 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.366 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6963512 kB' 'MemAvailable: 9464952 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456796 kB' 'Inactive: 2370476 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'KernelStack: 6752 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 
11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.366 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.366 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 
-- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.367 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:37.367 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:37.367 11:52:42 -- setup/hugepages.sh@100 -- # resv=0 00:08:37.367 nr_hugepages=1025 00:08:37.367 11:52:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:37.367 resv_hugepages=0 00:08:37.367 11:52:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:37.367 surplus_hugepages=0 00:08:37.367 11:52:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:37.367 anon_hugepages=0 00:08:37.367 11:52:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:37.367 11:52:42 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:37.367 11:52:42 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:37.367 11:52:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:37.367 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:37.367 11:52:42 -- setup/common.sh@18 -- # local node= 00:08:37.367 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:37.367 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.367 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.367 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.367 11:52:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.367 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.367 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.367 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.367 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6963512 kB' 'MemAvailable: 9464952 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456596 kB' 'Inactive: 2370476 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179048 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98944 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 323908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.367 
11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.367 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.368 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.368 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.628 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.628 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.628 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.628 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 
11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.629 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.629 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.630 11:52:42 -- setup/common.sh@33 -- # echo 1025 00:08:37.630 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:37.630 11:52:42 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:37.630 11:52:42 -- setup/hugepages.sh@112 -- # get_nodes 00:08:37.630 11:52:42 -- setup/hugepages.sh@27 -- # local node 00:08:37.630 11:52:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:37.630 11:52:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
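The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' followed by 'continue' above are the xtrace of the get_meminfo helper in setup/common.sh walking every /proc/meminfo field until it reaches the requested key (HugePages_Total here, which echoes 1025 and satisfies the '(( 1025 == nr_hugepages + surp + resv ))' check). A minimal sketch of that scan, reconstructed from the trace rather than copied from the SPDK source, looks roughly like this:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

  # get_meminfo <field> [<numa node>] -- echo the value of one meminfo field.
  # Names follow the traced script; the upstream implementation may differ.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      local -a mem
      # with a node argument, prefer the per-node file (node0 in the trace above)
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # per-node meminfo lines carry a "Node N " prefix; drop it
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every mismatch is one 'continue' trace line above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as 'get_meminfo HugePages_Total 0', a sketch like this would print the per-node hugepage count (1025 in the run above).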
00:08:37.630 11:52:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:37.630 11:52:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:37.630 11:52:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:37.630 11:52:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:37.630 11:52:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:37.630 11:52:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:37.630 11:52:42 -- setup/common.sh@18 -- # local node=0 00:08:37.630 11:52:42 -- setup/common.sh@19 -- # local var val 00:08:37.630 11:52:42 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.630 11:52:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.630 11:52:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:37.630 11:52:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:37.630 11:52:42 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.630 11:52:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6963764 kB' 'MemUsed: 5275352 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 2370476 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708824 kB' 'Mapped: 50772 kB' 'AnonPages: 119656 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179044 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 
11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 
11:52:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.630 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.630 11:52:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # continue 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.631 11:52:42 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.631 11:52:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.631 11:52:42 -- setup/common.sh@33 -- # echo 0 00:08:37.631 11:52:42 -- setup/common.sh@33 -- # return 0 00:08:37.631 11:52:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:37.631 11:52:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:37.631 11:52:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:37.631 node0=1025 expecting 1025 00:08:37.631 11:52:42 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:37.631 11:52:42 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:37.631 00:08:37.631 real 0m0.555s 00:08:37.631 user 0m0.265s 00:08:37.631 sys 0m0.320s 00:08:37.631 11:52:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.631 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:37.631 ************************************ 00:08:37.631 END TEST odd_alloc 00:08:37.631 ************************************ 00:08:37.631 11:52:42 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:37.631 11:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.631 11:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.631 11:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:37.631 ************************************ 00:08:37.631 START TEST custom_alloc 00:08:37.631 ************************************ 00:08:37.631 11:52:42 -- common/autotest_common.sh@1114 -- # custom_alloc 00:08:37.631 11:52:42 -- setup/hugepages.sh@167 -- # local IFS=, 00:08:37.631 11:52:42 -- setup/hugepages.sh@169 -- # local node 00:08:37.631 11:52:42 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:37.631 11:52:42 -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:37.631 11:52:42 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:37.631 11:52:42 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:08:37.631 11:52:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:37.631 11:52:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:37.631 11:52:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:37.631 11:52:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:37.631 11:52:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:37.631 11:52:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:37.631 11:52:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:37.631 11:52:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@83 -- # : 0 00:08:37.631 11:52:42 -- setup/hugepages.sh@84 -- # : 0 00:08:37.631 11:52:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:37.631 11:52:42 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:37.631 11:52:42 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:37.631 11:52:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:37.631 11:52:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:37.631 11:52:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:37.631 11:52:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:37.631 11:52:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:37.631 11:52:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:37.631 11:52:42 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:37.631 11:52:42 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:37.631 11:52:42 -- setup/hugepages.sh@78 -- # return 0 00:08:37.631 11:52:42 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:37.631 11:52:42 -- setup/hugepages.sh@187 -- # setup output 00:08:37.631 11:52:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:37.631 11:52:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:37.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.963 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.963 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.963 11:52:43 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:37.963 11:52:43 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:37.963 11:52:43 -- setup/hugepages.sh@89 -- # local node 00:08:37.963 11:52:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:37.963 11:52:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:37.963 11:52:43 -- setup/hugepages.sh@92 -- # local surp 
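Just above, get_test_nr_hugepages turns the requested pool size of 1048576 kB into nr_hugepages=512 on a single node and exports it as HUGENODE='nodes_hp[0]=512'. That count is consistent with dividing the requested size by the 2048 kB Hugepagesize reported in the meminfo dumps; a small illustrative computation (variable names are ours, not the script's):

  size_kb=1048576                            # requested hugepage pool, i.e. 1 GiB in kB
  hugepage_kb=2048                           # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))  # 1048576 / 2048 = 512, matching the trace
  echo "nodes_hp[0]=$nr_hugepages"           # -> nodes_hp[0]=512, the HUGENODE value above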
00:08:37.963 11:52:43 -- setup/hugepages.sh@93 -- # local resv 00:08:37.963 11:52:43 -- setup/hugepages.sh@94 -- # local anon 00:08:37.963 11:52:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:37.963 11:52:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:37.963 11:52:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:37.963 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:37.963 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:37.963 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.963 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.963 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.963 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.963 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.963 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.963 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.963 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8015428 kB' 'MemAvailable: 10516868 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 457008 kB' 'Inactive: 2370476 kB' 'Active(anon): 129148 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120244 kB' 'Mapped: 50892 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'KernelStack: 6776 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val 
_ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 
00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.964 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.964 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.965 11:52:43 -- setup/common.sh@33 -- # echo 0 00:08:37.965 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:37.965 11:52:43 -- setup/hugepages.sh@97 -- # anon=0 00:08:37.965 11:52:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:37.965 11:52:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:37.965 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:37.965 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:37.965 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.965 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
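The get_meminfo calls in this stretch feed verify_nr_hugepages' accounting: AnonHugePages (0 in this run), then HugePages_Surp and HugePages_Rsvd, which are compared against HugePages_Total just as the '(( 1025 == nr_hugepages + surp + resv ))' check did for odd_alloc. Pieced together from the traced hugepages.sh line numbers, and assuming the get_meminfo sketch given earlier, the check amounts to roughly:

  nr_hugepages=512                      # the pool size custom_alloc just requested
  anon=$(get_meminfo AnonHugePages)     # 0 in this run; transparent-hugepage usage would show up here
  surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the configured pool
  resv=$(get_meminfo HugePages_Rsvd)    # pages reserved but not yet faulted in
  total=$(get_meminfo HugePages_Total)
  # the same identity traced earlier is expected to hold for the 512-page pool
  if (( total != nr_hugepages + surp + resv )); then
      echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
  fi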
00:08:37.965 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.965 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.965 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.965 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8015428 kB' 'MemAvailable: 10516868 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456616 kB' 'Inactive: 2370476 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179080 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 6752 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- 
setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.965 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.965 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 
00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.966 11:52:43 -- setup/common.sh@33 -- # echo 0 00:08:37.966 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:37.966 11:52:43 -- setup/hugepages.sh@99 -- # surp=0 00:08:37.966 11:52:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:37.966 11:52:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:37.966 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:37.966 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:37.966 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.966 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.966 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.966 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.966 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.966 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8015428 kB' 'MemAvailable: 10516868 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456596 kB' 'Inactive: 2370476 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 
50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179072 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98968 kB' 'KernelStack: 6736 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.966 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.966 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 
00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 
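The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' followed by 'continue' above and below are get_meminfo walking the captured meminfo snapshot one field at a time until it reaches the requested key, then echoing its value. A minimal sketch of that loop, reconstructed from the trace (the structure follows the IFS=': ', read, continue, echo, return entries visible in the log; it is not the verbatim setup/common.sh, and the node handling shown is simplified):

  # Sketch reconstructed from the trace; not the verbatim SPDK script.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # A per-node query reads the node-specific file instead of /proc/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the "continue" entries in the log
          echo "$val"                        # the "echo 0" / "echo 512" entries
          return 0
      done
      return 1
  }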
00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.967 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.967 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.967 11:52:43 -- setup/common.sh@33 -- # echo 0 00:08:37.967 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:37.967 11:52:43 -- setup/hugepages.sh@100 -- # resv=0 00:08:37.967 nr_hugepages=512 00:08:37.967 11:52:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:37.967 resv_hugepages=0 00:08:37.967 11:52:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:37.967 surplus_hugepages=0 00:08:37.967 11:52:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:37.967 anon_hugepages=0 00:08:37.967 11:52:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:37.967 11:52:43 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:37.967 11:52:43 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:37.967 11:52:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:37.967 11:52:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:37.968 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:37.968 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:37.968 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.968 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.968 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.968 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.968 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.968 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.968 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8016004 kB' 'MemAvailable: 10517444 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456336 kB' 'Inactive: 2370476 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179072 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98968 kB' 'KernelStack: 6736 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 324040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.968 11:52:43 -- setup/common.sh@32 -- # continue 00:08:37.968 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.228 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.228 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 
00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.229 11:52:43 -- setup/common.sh@33 -- # echo 512 00:08:38.229 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:38.229 11:52:43 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:38.229 11:52:43 -- setup/hugepages.sh@112 -- # get_nodes 00:08:38.229 11:52:43 -- setup/hugepages.sh@27 -- # local node 00:08:38.229 11:52:43 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:08:38.229 11:52:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:38.229 11:52:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:38.229 11:52:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:38.229 11:52:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:38.229 11:52:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:38.229 11:52:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:38.229 11:52:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:38.229 11:52:43 -- setup/common.sh@18 -- # local node=0 00:08:38.229 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:38.229 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.229 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.229 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:38.229 11:52:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:38.229 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.229 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 8016160 kB' 'MemUsed: 4222956 kB' 'SwapCached: 0 kB' 'Active: 456768 kB' 'Inactive: 2370476 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708824 kB' 'Mapped: 50772 kB' 'AnonPages: 119996 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179072 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.229 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.229 11:52:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 
11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.230 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.230 11:52:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.230 11:52:43 -- setup/common.sh@33 -- # echo 0 00:08:38.230 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:38.230 11:52:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:38.230 11:52:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:38.230 11:52:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:38.230 11:52:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:38.230 node0=512 expecting 512 00:08:38.230 11:52:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:38.230 11:52:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:38.230 00:08:38.230 real 0m0.546s 00:08:38.230 user 0m0.253s 00:08:38.230 sys 0m0.325s 00:08:38.230 11:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.230 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.230 ************************************ 00:08:38.230 END TEST custom_alloc 00:08:38.230 ************************************ 00:08:38.230 11:52:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:38.230 11:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.230 11:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.230 11:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.230 ************************************ 00:08:38.230 START TEST no_shrink_alloc 00:08:38.230 ************************************ 00:08:38.230 11:52:43 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:08:38.230 11:52:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:38.230 11:52:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:38.230 11:52:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:38.230 11:52:43 -- 
setup/hugepages.sh@51 -- # shift 00:08:38.230 11:52:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:38.230 11:52:43 -- setup/hugepages.sh@52 -- # local node_ids 00:08:38.230 11:52:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:38.230 11:52:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:38.230 11:52:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:38.230 11:52:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:38.230 11:52:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:38.230 11:52:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:38.230 11:52:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:38.230 11:52:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:38.230 11:52:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:38.230 11:52:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:38.230 11:52:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:38.231 11:52:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:38.231 11:52:43 -- setup/hugepages.sh@73 -- # return 0 00:08:38.231 11:52:43 -- setup/hugepages.sh@198 -- # setup output 00:08:38.231 11:52:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.231 11:52:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:38.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.491 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:38.491 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:38.491 11:52:43 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:38.491 11:52:43 -- setup/hugepages.sh@89 -- # local node 00:08:38.491 11:52:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:38.491 11:52:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:38.491 11:52:43 -- setup/hugepages.sh@92 -- # local surp 00:08:38.491 11:52:43 -- setup/hugepages.sh@93 -- # local resv 00:08:38.491 11:52:43 -- setup/hugepages.sh@94 -- # local anon 00:08:38.491 11:52:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:38.491 11:52:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:38.491 11:52:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:38.491 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:38.491 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:38.491 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.491 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.491 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:38.491 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:38.491 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.491 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6969372 kB' 'MemAvailable: 9470812 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456836 kB' 'Inactive: 2370476 kB' 'Active(anon): 128976 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120056 kB' 
'Mapped: 50840 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179068 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98964 kB' 'KernelStack: 6744 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.491 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.491 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
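Just before this scan, hugepages.sh checked '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]': AnonHugePages is only consulted when transparent hugepages are not set to [never] on the host. A rough sketch of that decision, reconstructed from the trace (the sysfs path is the standard THP control file; the surrounding variable names are assumptions, not the script's own):

  # Sketch: count anonymous THP toward the total only when THP is not globally disabled.
  thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp_setting != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # kB of anonymous huge pages currently in use (0 in this run)
  fi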
00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.492 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:38.492 11:52:43 -- setup/common.sh@33 -- # echo 0 00:08:38.492 11:52:43 -- setup/common.sh@33 -- # return 0 00:08:38.492 11:52:43 -- setup/hugepages.sh@97 -- # anon=0 00:08:38.492 11:52:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:38.492 11:52:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:38.492 11:52:43 -- setup/common.sh@18 -- # local node= 00:08:38.492 11:52:43 -- setup/common.sh@19 -- # local var val 00:08:38.492 11:52:43 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.492 11:52:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.492 11:52:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:38.492 11:52:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:38.492 11:52:43 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.492 11:52:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.492 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6969372 kB' 'MemAvailable: 9470812 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456484 kB' 'Inactive: 2370476 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50896 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179072 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98968 kB' 'KernelStack: 6744 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 
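The scan running here is verify_nr_hugepages fetching the system-wide HugePages_Surp before its per-node pass. The bookkeeping it feeds can be pieced together from the surrounding trace entries (resv, the '(( 512 == nr_hugepages + surp + resv ))' check, the per-node loop, and the 'node0=512 expecting 512' echo earlier); a sketch under those assumptions, not the verbatim hugepages.sh:

  # Sketch of the accounting seen in the trace: global totals first, then per node.
  resv=$(get_meminfo HugePages_Rsvd)
  surp=$(get_meminfo HugePages_Surp)
  total=$(get_meminfo HugePages_Total)
  (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done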
00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 
00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.493 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.493 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.494 11:52:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:38.494 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.494 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.494 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.494 11:52:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:43 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:43 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.756 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.756 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.756 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:38.757 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:38.757 11:52:44 -- setup/hugepages.sh@99 -- # surp=0 00:08:38.757 11:52:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:38.757 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:38.757 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:38.757 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:38.757 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.757 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.757 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:38.757 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:38.757 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.757 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6969372 kB' 'MemAvailable: 9470812 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456396 kB' 'Inactive: 2370476 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179068 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98964 kB' 'KernelStack: 6768 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # 
continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.757 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.757 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 
11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:38.758 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:38.758 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:38.758 nr_hugepages=1024 00:08:38.758 resv_hugepages=0 00:08:38.758 surplus_hugepages=0 00:08:38.758 11:52:44 -- setup/hugepages.sh@100 -- # resv=0 00:08:38.758 11:52:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:38.758 11:52:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:38.758 11:52:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:38.758 anon_hugepages=0 00:08:38.758 11:52:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:38.758 11:52:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:38.758 11:52:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:38.758 11:52:44 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:08:38.758 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:38.758 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:38.758 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:38.758 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.758 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.758 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:38.758 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:38.758 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.758 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6969372 kB' 'MemAvailable: 9470812 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456436 kB' 'Inactive: 2370476 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119940 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179060 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98956 kB' 'KernelStack: 6768 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.758 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.758 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 
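For reference, the checks traced at hugepages.sh@99 through @110 boil down to one accounting identity: HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, all read from the same meminfo source. A compressed sketch of that identity, using awk in place of the script's own parser (the values in the comments are what this run reports):

  # Sketch of the check behind "(( 1024 == nr_hugepages + surp + resv ))".
  nr_hugepages=1024                                                 # requested by the test
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)        # 0 here
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)        # 0 here
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)       # 1024 here

  (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting' >&2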
00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 
-- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.759 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.759 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:38.759 11:52:44 -- setup/common.sh@33 -- # echo 1024 00:08:38.759 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:38.759 11:52:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:38.759 11:52:44 -- setup/hugepages.sh@112 -- # get_nodes 00:08:38.759 11:52:44 -- setup/hugepages.sh@27 -- # local node 00:08:38.759 11:52:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:38.759 11:52:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:38.759 11:52:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:38.759 11:52:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:38.759 11:52:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:38.759 11:52:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:38.759 11:52:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:38.759 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:38.759 11:52:44 -- setup/common.sh@18 -- # local node=0 00:08:38.759 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:38.759 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:38.759 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:38.759 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:38.759 11:52:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:38.760 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:38.760 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6969372 kB' 'MemUsed: 5269744 kB' 'SwapCached: 0 kB' 'Active: 456620 kB' 'Inactive: 2370476 kB' 'Active(anon): 128760 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2708824 kB' 'Mapped: 50772 kB' 'AnonPages: 119844 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179056 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 
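The pass now running is per-node: get_nodes globs /sys/devices/system/node/node+([0-9]) (a single node0 on this VM), and the same lookup is repeated against each node's own meminfo file. A self-contained sketch of that enumeration:

  # Sketch: collect HugePages_Total per NUMA node, as the node loop above does.
  shopt -s extglob nullglob
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
      echo "node${id}=${total}"        # this run prints: node0=1024
  done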
00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # continue 00:08:38.760 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 
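One detail worth calling out in the node-scoped lookup: every line of /sys/devices/system/node/node0/meminfo carries a "Node 0 " prefix, and common.sh@29 strips it with "${mem[@]#Node +([0-9]) }" so that the same key/value parser serves both the global and the per-node files. A short illustration with hypothetical sample lines:

  # Prefix strip as at setup/common.sh@29 (extglob is required for the +([0-9]) pattern).
  shopt -s extglob
  mem=('Node 0 HugePages_Total:    1024' 'Node 0 HugePages_Free:     1024')
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"    # -> HugePages_Total:    1024 / HugePages_Free:     1024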
00:08:38.760 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:38.760 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:38.760 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:38.760 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:38.760 11:52:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:38.761 11:52:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:38.761 11:52:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:38.761 11:52:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:38.761 11:52:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:38.761 node0=1024 expecting 1024 00:08:38.761 11:52:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:38.761 11:52:44 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:38.761 11:52:44 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:38.761 11:52:44 -- setup/hugepages.sh@202 -- # setup output 00:08:38.761 11:52:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.761 11:52:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:39.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.020 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:39.020 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:39.283 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:39.283 11:52:44 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:39.283 11:52:44 -- setup/hugepages.sh@89 -- # local node 00:08:39.283 11:52:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:39.283 11:52:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:39.283 11:52:44 -- setup/hugepages.sh@92 -- # local surp 00:08:39.283 11:52:44 -- setup/hugepages.sh@93 -- # local resv 00:08:39.283 11:52:44 -- setup/hugepages.sh@94 -- # local anon 00:08:39.283 11:52:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:39.283 11:52:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:39.283 11:52:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:39.283 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:39.283 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.283 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.283 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.283 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.283 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.283 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.283 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967628 kB' 'MemAvailable: 9469068 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 457276 kB' 'Inactive: 2370476 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120564 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179092 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98988 kB' 'KernelStack: 6856 kB' 'PageTables: 4592 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.283 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.283 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 
11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 
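For readers following the trace: the key scan logged here is setup/common.sh's get_meminfo helper walking every line of /proc/meminfo (or a per-node meminfo file) until it hits the requested field. A minimal sketch of that flow, reconstructed only from the commands visible in this trace; the real helper carries extra bookkeeping not shown here.

  #!/usr/bin/env bash
  shopt -s extglob
  # Simplified get_meminfo: print the value of one meminfo field, optionally per NUMA node.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      echo 0                             # field not present
  }
  # e.g. get_meminfo AnonHugePages    -> 0 on this run
  #      get_meminfo HugePages_Surp 0 -> surplus 2M pages on node0
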
00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:39.284 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:39.284 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:39.284 11:52:44 -- setup/hugepages.sh@97 -- # anon=0 00:08:39.284 11:52:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:39.284 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:39.284 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:39.284 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.284 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.284 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.284 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.284 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.284 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.284 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967376 kB' 'MemAvailable: 9468816 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456652 kB' 'Inactive: 2370476 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119840 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179112 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 99008 kB' 'KernelStack: 6736 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.284 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.284 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- 
setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 
00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.285 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.285 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.286 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:39.286 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:39.286 11:52:44 -- setup/hugepages.sh@99 -- # surp=0 00:08:39.286 11:52:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:39.286 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:39.286 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:39.286 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.286 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.286 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.286 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.286 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.286 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.286 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967376 kB' 'MemAvailable: 9468816 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456664 kB' 'Inactive: 2370476 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179108 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 99004 kB' 'KernelStack: 6752 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.286 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.286 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 
00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.287 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:39.287 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:39.287 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:39.287 11:52:44 -- setup/hugepages.sh@100 -- # resv=0 00:08:39.287 nr_hugepages=1024 00:08:39.287 resv_hugepages=0 00:08:39.287 surplus_hugepages=0 00:08:39.287 anon_hugepages=0 00:08:39.287 11:52:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:39.287 11:52:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:39.287 11:52:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:39.287 11:52:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:39.287 11:52:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:39.287 11:52:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:39.287 11:52:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:39.287 11:52:44 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:08:39.287 11:52:44 -- setup/common.sh@18 -- # local node= 00:08:39.287 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.287 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.287 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.287 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.287 11:52:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.287 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.287 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.287 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6967376 kB' 'MemAvailable: 9468816 kB' 'Buffers: 2684 kB' 'Cached: 2706140 kB' 'SwapCached: 0 kB' 'Active: 456768 kB' 'Inactive: 2370476 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120032 kB' 'Mapped: 50772 kB' 'Shmem: 10488 kB' 'KReclaimable: 80104 kB' 'Slab: 179100 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98996 kB' 'KernelStack: 6752 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 324240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- 
setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 
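The pass/fail logic traced at setup/hugepages.sh@107-@110 boils down to plain shell arithmetic over the counters collected above; a sketch with this run's values (variable names here are illustrative, not the script's):

  # anon=0, surp=0 and resv=0 were read back above; HugePages_Total reports 1024.
  nr_hugepages=1024 total=1024 surp=0 resv=0
  (( total == nr_hugepages + surp + resv )) && echo 'global hugepage count matches'
  (( total == nr_hugepages ))               && echo 'no surplus or reserved pages outstanding'
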
00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.288 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.288 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 
11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:39.289 11:52:44 -- setup/common.sh@33 -- # echo 1024 00:08:39.289 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:39.289 11:52:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:39.289 11:52:44 -- setup/hugepages.sh@112 -- # get_nodes 00:08:39.289 11:52:44 -- setup/hugepages.sh@27 -- # local node 00:08:39.289 11:52:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:39.289 11:52:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:39.289 11:52:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:39.289 11:52:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:39.289 11:52:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:39.289 11:52:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:39.289 11:52:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:39.289 11:52:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:39.289 11:52:44 -- setup/common.sh@18 -- # local node=0 00:08:39.289 11:52:44 -- setup/common.sh@19 -- # local var val 00:08:39.289 11:52:44 -- setup/common.sh@20 -- # local mem_f mem 00:08:39.289 11:52:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.289 11:52:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:39.289 11:52:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:39.289 11:52:44 -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.289 11:52:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968416 kB' 'MemUsed: 5270700 kB' 'SwapCached: 0 kB' 'Active: 454584 kB' 'Inactive: 2370476 kB' 'Active(anon): 126724 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2370476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2708824 kB' 'Mapped: 49992 kB' 'AnonPages: 118108 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80104 kB' 'Slab: 179088 kB' 'SReclaimable: 80104 kB' 'SUnreclaim: 98984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.289 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.289 11:52:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 
11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- 
# continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@32 -- # continue 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # IFS=': ' 00:08:39.290 11:52:44 -- setup/common.sh@31 -- # read -r var val _ 00:08:39.290 11:52:44 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:39.290 11:52:44 -- setup/common.sh@33 -- # echo 0 00:08:39.290 11:52:44 -- setup/common.sh@33 -- # return 0 00:08:39.290 node0=1024 expecting 1024 00:08:39.290 ************************************ 00:08:39.290 END TEST no_shrink_alloc 00:08:39.290 ************************************ 00:08:39.290 11:52:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:39.290 11:52:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:39.290 11:52:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:39.290 11:52:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:39.290 11:52:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:39.290 11:52:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:39.290 00:08:39.290 real 0m1.150s 00:08:39.290 user 0m0.561s 00:08:39.290 sys 0m0.602s 00:08:39.290 11:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.290 11:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.290 11:52:44 -- setup/hugepages.sh@217 -- # clear_hp 00:08:39.290 11:52:44 -- setup/hugepages.sh@37 -- # local node hp 00:08:39.290 11:52:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:39.290 11:52:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:39.290 11:52:44 -- setup/hugepages.sh@41 -- # echo 0 00:08:39.290 11:52:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:39.290 11:52:44 -- setup/hugepages.sh@41 -- # echo 0 00:08:39.290 11:52:44 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:39.290 11:52:44 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:39.290 ************************************ 00:08:39.290 END TEST hugepages 00:08:39.290 ************************************ 00:08:39.290 00:08:39.290 real 0m5.075s 00:08:39.290 user 0m2.394s 00:08:39.290 sys 0m2.704s 00:08:39.290 11:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.290 11:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.550 11:52:44 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:39.550 11:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:39.550 11:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.550 11:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.550 ************************************ 00:08:39.550 START TEST driver 00:08:39.550 ************************************ 00:08:39.550 11:52:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:39.550 * Looking for test storage... 
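The hugepages trace above boils down to one small parsing pattern: open /proc/meminfo (or a per-node meminfo file), split each line with IFS=': ', skip every key until the requested one matches, and echo its value. The following is a minimal stand-alone sketch of that pattern; the helper name get_meminfo_value and the way the per-node "Node <n>" prefix is stripped are assumptions for illustration, not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch: return one /proc/meminfo (or per-node meminfo) value, mirroring the
# IFS=': ' / read -r var val _ loop recorded in the trace above.
get_meminfo_value() {                 # assumed helper name, illustration only
    local get=$1 node=${2:-}          # e.g. HugePages_Total, optional NUMA node
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }            # per-node files prefix "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # value only, the kB unit is dropped
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_value HugePages_Total      # e.g. 1024 on this runner
get_meminfo_value HugePages_Surp 0     # surplus hugepages on NUMA node 0 -> 0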
00:08:39.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:39.550 11:52:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:39.550 11:52:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:39.550 11:52:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:39.550 11:52:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:39.550 11:52:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:39.550 11:52:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:39.550 11:52:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:39.550 11:52:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:39.550 11:52:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:39.550 11:52:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.550 11:52:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:39.550 11:52:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:39.550 11:52:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:39.550 11:52:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:39.550 11:52:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:39.550 11:52:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:39.550 11:52:44 -- scripts/common.sh@344 -- # : 1 00:08:39.550 11:52:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:39.550 11:52:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.550 11:52:45 -- scripts/common.sh@364 -- # decimal 1 00:08:39.550 11:52:45 -- scripts/common.sh@352 -- # local d=1 00:08:39.550 11:52:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.550 11:52:45 -- scripts/common.sh@354 -- # echo 1 00:08:39.550 11:52:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:39.550 11:52:45 -- scripts/common.sh@365 -- # decimal 2 00:08:39.550 11:52:45 -- scripts/common.sh@352 -- # local d=2 00:08:39.550 11:52:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.550 11:52:45 -- scripts/common.sh@354 -- # echo 2 00:08:39.550 11:52:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:39.550 11:52:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:39.550 11:52:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:39.550 11:52:45 -- scripts/common.sh@367 -- # return 0 00:08:39.550 11:52:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.550 11:52:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.550 --rc genhtml_branch_coverage=1 00:08:39.550 --rc genhtml_function_coverage=1 00:08:39.550 --rc genhtml_legend=1 00:08:39.550 --rc geninfo_all_blocks=1 00:08:39.550 --rc geninfo_unexecuted_blocks=1 00:08:39.550 00:08:39.550 ' 00:08:39.550 11:52:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.550 --rc genhtml_branch_coverage=1 00:08:39.550 --rc genhtml_function_coverage=1 00:08:39.550 --rc genhtml_legend=1 00:08:39.550 --rc geninfo_all_blocks=1 00:08:39.550 --rc geninfo_unexecuted_blocks=1 00:08:39.550 00:08:39.550 ' 00:08:39.550 11:52:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.550 --rc genhtml_branch_coverage=1 00:08:39.550 --rc genhtml_function_coverage=1 00:08:39.550 --rc genhtml_legend=1 00:08:39.550 --rc geninfo_all_blocks=1 00:08:39.550 --rc geninfo_unexecuted_blocks=1 00:08:39.550 00:08:39.550 ' 00:08:39.550 11:52:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.550 --rc genhtml_branch_coverage=1 00:08:39.550 --rc genhtml_function_coverage=1 00:08:39.550 --rc genhtml_legend=1 00:08:39.550 --rc geninfo_all_blocks=1 00:08:39.550 --rc geninfo_unexecuted_blocks=1 00:08:39.550 00:08:39.550 ' 00:08:39.550 11:52:45 -- setup/driver.sh@68 -- # setup reset 00:08:39.550 11:52:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:39.550 11:52:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.117 11:52:45 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:40.117 11:52:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.117 11:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.117 11:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.117 ************************************ 00:08:40.117 START TEST guess_driver 00:08:40.117 ************************************ 00:08:40.117 11:52:45 -- common/autotest_common.sh@1114 -- # guess_driver 00:08:40.117 11:52:45 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:40.117 11:52:45 -- setup/driver.sh@47 -- # local fail=0 00:08:40.117 11:52:45 -- setup/driver.sh@49 -- # pick_driver 00:08:40.117 11:52:45 -- setup/driver.sh@36 -- # vfio 00:08:40.117 11:52:45 -- setup/driver.sh@21 -- # local iommu_grups 00:08:40.117 11:52:45 -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:40.117 11:52:45 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:40.117 11:52:45 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:40.117 11:52:45 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:40.117 11:52:45 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:40.117 11:52:45 -- setup/driver.sh@32 -- # return 1 00:08:40.117 11:52:45 -- setup/driver.sh@38 -- # uio 00:08:40.117 11:52:45 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:40.117 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:40.117 11:52:45 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:40.117 Looking for driver=uio_pci_generic 00:08:40.117 11:52:45 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:40.117 11:52:45 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:40.117 11:52:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:40.117 11:52:45 -- setup/driver.sh@45 -- # setup output config 00:08:40.117 11:52:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:40.117 11:52:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:41.052 11:52:46 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:41.052 11:52:46 -- setup/driver.sh@58 -- # continue 00:08:41.052 11:52:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:41.052 11:52:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:41.052 11:52:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:08:41.052 11:52:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:41.052 11:52:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:41.052 11:52:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:41.052 11:52:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:41.052 11:52:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:41.052 11:52:46 -- setup/driver.sh@65 -- # setup reset 00:08:41.052 11:52:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:41.052 11:52:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:41.620 00:08:41.620 real 0m1.471s 00:08:41.620 user 0m0.583s 00:08:41.620 sys 0m0.870s 00:08:41.620 11:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.620 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.620 ************************************ 00:08:41.620 END TEST guess_driver 00:08:41.620 ************************************ 00:08:41.620 ************************************ 00:08:41.620 END TEST driver 00:08:41.620 ************************************ 00:08:41.620 00:08:41.620 real 0m2.293s 00:08:41.620 user 0m0.912s 00:08:41.620 sys 0m1.427s 00:08:41.620 11:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.620 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.880 11:52:47 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:41.880 11:52:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.880 11:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.880 ************************************ 00:08:41.880 START TEST devices 00:08:41.880 ************************************ 00:08:41.880 11:52:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:41.880 * Looking for test storage... 00:08:41.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:41.880 11:52:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.880 11:52:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.880 11:52:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.880 11:52:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.880 11:52:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.880 11:52:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:41.880 11:52:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:41.880 11:52:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:41.880 11:52:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.880 11:52:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:41.880 11:52:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:41.880 11:52:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:41.880 11:52:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:41.880 11:52:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:41.880 11:52:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:41.880 11:52:47 -- scripts/common.sh@344 -- # : 1 00:08:41.880 11:52:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:41.880 11:52:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.880 11:52:47 -- scripts/common.sh@364 -- # decimal 1 00:08:41.880 11:52:47 -- scripts/common.sh@352 -- # local d=1 00:08:41.880 11:52:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.880 11:52:47 -- scripts/common.sh@354 -- # echo 1 00:08:41.880 11:52:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:41.880 11:52:47 -- scripts/common.sh@365 -- # decimal 2 00:08:41.880 11:52:47 -- scripts/common.sh@352 -- # local d=2 00:08:41.880 11:52:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.880 11:52:47 -- scripts/common.sh@354 -- # echo 2 00:08:41.880 11:52:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:41.880 11:52:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:41.880 11:52:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:41.880 11:52:47 -- scripts/common.sh@367 -- # return 0 00:08:41.880 11:52:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.880 --rc genhtml_branch_coverage=1 00:08:41.880 --rc genhtml_function_coverage=1 00:08:41.880 --rc genhtml_legend=1 00:08:41.880 --rc geninfo_all_blocks=1 00:08:41.880 --rc geninfo_unexecuted_blocks=1 00:08:41.880 00:08:41.880 ' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.880 --rc genhtml_branch_coverage=1 00:08:41.880 --rc genhtml_function_coverage=1 00:08:41.880 --rc genhtml_legend=1 00:08:41.880 --rc geninfo_all_blocks=1 00:08:41.880 --rc geninfo_unexecuted_blocks=1 00:08:41.880 00:08:41.880 ' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.880 --rc genhtml_branch_coverage=1 00:08:41.880 --rc genhtml_function_coverage=1 00:08:41.880 --rc genhtml_legend=1 00:08:41.880 --rc geninfo_all_blocks=1 00:08:41.880 --rc geninfo_unexecuted_blocks=1 00:08:41.880 00:08:41.880 ' 00:08:41.880 11:52:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.880 --rc genhtml_branch_coverage=1 00:08:41.880 --rc genhtml_function_coverage=1 00:08:41.880 --rc genhtml_legend=1 00:08:41.880 --rc geninfo_all_blocks=1 00:08:41.880 --rc geninfo_unexecuted_blocks=1 00:08:41.880 00:08:41.880 ' 00:08:41.880 11:52:47 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:41.880 11:52:47 -- setup/devices.sh@192 -- # setup reset 00:08:41.880 11:52:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:41.880 11:52:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:42.817 11:52:48 -- setup/devices.sh@194 -- # get_zoned_devs 00:08:42.817 11:52:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:42.817 11:52:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:42.817 11:52:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:42.817 11:52:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:42.817 11:52:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:42.817 11:52:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:42.817 11:52:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:42.817 11:52:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:42.817 11:52:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:42.817 11:52:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:42.817 11:52:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:42.817 11:52:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:42.817 11:52:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:42.817 11:52:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:42.817 11:52:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:42.817 11:52:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:42.817 11:52:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:42.817 11:52:48 -- setup/devices.sh@196 -- # blocks=() 00:08:42.817 11:52:48 -- setup/devices.sh@196 -- # declare -a blocks 00:08:42.817 11:52:48 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:42.817 11:52:48 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:42.817 11:52:48 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:42.817 11:52:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:42.817 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:42.817 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:42.817 11:52:48 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:08:42.817 11:52:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:08:42.817 11:52:48 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:42.817 11:52:48 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:08:42.817 11:52:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:42.817 No valid GPT data, bailing 00:08:42.817 11:52:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:42.817 11:52:48 -- scripts/common.sh@393 -- # pt= 00:08:42.817 11:52:48 -- scripts/common.sh@394 -- # return 1 00:08:42.817 11:52:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:42.817 11:52:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:42.817 11:52:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:42.817 11:52:48 -- setup/common.sh@80 -- # echo 5368709120 00:08:42.817 11:52:48 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:42.817 11:52:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:42.817 11:52:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:08:42.817 11:52:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:42.817 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:42.817 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:42.817 11:52:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:42.817 11:52:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:42.817 11:52:48 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
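Before the namespaces are used, the devices test skips any zoned block device: for every /sys/block/nvme* entry it reads queue/zoned and treats anything other than "none" as zoned. A small stand-alone sketch of that check follows; the function name list_zoned_nvme is an assumption for illustration.

#!/usr/bin/env bash
# Sketch: collect zoned NVMe namespaces the same way the trace above does,
# by reading /sys/block/<dev>/queue/zoned and comparing against "none".
list_zoned_nvme() {                      # assumed helper name
    local nvme zoned
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        read -r zoned < "$nvme/queue/zoned"
        if [[ $zoned != none ]]; then    # "host-managed"/"host-aware" => zoned
            echo "${nvme##*/}"
        fi
    done
}

list_zoned_nvme    # prints nothing on this runner: every namespace reports "none"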
00:08:42.817 11:52:48 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:08:42.817 11:52:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:42.817 No valid GPT data, bailing 00:08:42.817 11:52:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:42.817 11:52:48 -- scripts/common.sh@393 -- # pt= 00:08:42.817 11:52:48 -- scripts/common.sh@394 -- # return 1 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:43.077 11:52:48 -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:43.077 11:52:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:43.077 11:52:48 -- setup/common.sh@80 -- # echo 4294967296 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:43.077 11:52:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:43.077 11:52:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:43.077 11:52:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:43.077 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:08:43.077 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:43.077 11:52:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:43.077 11:52:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:08:43.077 11:52:48 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:08:43.077 11:52:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:08:43.077 No valid GPT data, bailing 00:08:43.077 11:52:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:43.077 11:52:48 -- scripts/common.sh@393 -- # pt= 00:08:43.077 11:52:48 -- scripts/common.sh@394 -- # return 1 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:08:43.077 11:52:48 -- setup/common.sh@76 -- # local dev=nvme1n2 00:08:43.077 11:52:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:08:43.077 11:52:48 -- setup/common.sh@80 -- # echo 4294967296 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:43.077 11:52:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:43.077 11:52:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:43.077 11:52:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:43.077 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:08:43.077 11:52:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:43.077 11:52:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:08:43.077 11:52:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:08:43.077 11:52:48 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:08:43.077 11:52:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:08:43.077 No valid GPT data, bailing 00:08:43.077 11:52:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:43.077 11:52:48 -- scripts/common.sh@393 -- # pt= 00:08:43.077 11:52:48 -- scripts/common.sh@394 -- # return 1 00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:08:43.077 11:52:48 -- setup/common.sh@76 -- # local dev=nvme1n3 00:08:43.077 11:52:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:08:43.077 11:52:48 -- setup/common.sh@80 -- # echo 4294967296 
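Each candidate namespace is then size-gated against min_disk_size=3221225472 (3 GiB) before being added to the blocks array; the 5368709120 and 4294967296 values echoed above are the byte sizes of nvme0n1 and nvme1n1-n3 on this runner. Below is a hedged sketch of that gate which assumes the byte size is derived from the 512-byte sector count in /sys/block/<dev>/size; the real helper may obtain it differently, and the device list is copied from the trace purely for illustration.

#!/usr/bin/env bash
# Sketch: accept a block device only if it is at least 3 GiB, as in the trace above.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

dev_size_bytes() {                          # assumption: size taken from sysfs
    local dev=$1 sectors
    read -r sectors < "/sys/block/$dev/size"
    echo $((sectors * 512))                 # /sys/block/*/size counts 512-byte sectors
}

blocks=()
for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    [[ -e /sys/block/$dev ]] || continue
    size=$(dev_size_bytes "$dev")
    if ((size >= min_disk_size)); then      # 5368709120 and 4294967296 both pass
        blocks+=("$dev")
    fi
done
((${#blocks[@]})) && printf 'usable: %s\n' "${blocks[@]}"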
00:08:43.077 11:52:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:43.077 11:52:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:43.077 11:52:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:08:43.077 11:52:48 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:43.077 11:52:48 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:43.077 11:52:48 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:43.077 11:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.077 11:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.077 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.077 ************************************ 00:08:43.077 START TEST nvme_mount 00:08:43.077 ************************************ 00:08:43.077 11:52:48 -- common/autotest_common.sh@1114 -- # nvme_mount 00:08:43.077 11:52:48 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:43.077 11:52:48 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:43.077 11:52:48 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:43.077 11:52:48 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:43.077 11:52:48 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:43.077 11:52:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:43.077 11:52:48 -- setup/common.sh@40 -- # local part_no=1 00:08:43.077 11:52:48 -- setup/common.sh@41 -- # local size=1073741824 00:08:43.077 11:52:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:43.077 11:52:48 -- setup/common.sh@44 -- # parts=() 00:08:43.077 11:52:48 -- setup/common.sh@44 -- # local parts 00:08:43.077 11:52:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:43.077 11:52:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:43.077 11:52:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:43.077 11:52:48 -- setup/common.sh@46 -- # (( part++ )) 00:08:43.077 11:52:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:43.077 11:52:48 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:43.077 11:52:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:43.077 11:52:48 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:44.070 Creating new GPT entries in memory. 00:08:44.070 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:44.070 other utilities. 00:08:44.070 11:52:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:44.070 11:52:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:44.070 11:52:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:44.070 11:52:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:44.070 11:52:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:45.449 Creating new GPT entries in memory. 00:08:45.449 The operation has completed successfully. 
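The partition_drive step above wipes the disk and then creates the test partition with sgdisk while holding flock on the device node, so concurrent partitioners serialize; the "GPT data structures destroyed!" and "The operation has completed successfully." messages are sgdisk's own output. A minimal sketch of that sequence follows; it hard-codes the single 2048-264191 sector range seen in the trace, and partprobe here merely stands in for the repo's sync_dev_uevents.sh helper.

#!/usr/bin/env bash
# Sketch: zap a disk and create one GPT partition under an exclusive lock,
# mirroring the sgdisk calls recorded in the trace above. Destructive - example only.
set -euo pipefail

disk=/dev/nvme0n1          # assumed target, matching the trace

# Drop any existing GPT/MBR structures.
sgdisk "$disk" --zap-all

# Create partition 1 over sectors 2048..264191 (~128 MiB) while holding an
# exclusive lock on the whole disk so parallel jobs cannot interleave.
flock "$disk" sgdisk "$disk" --new=1:2048:264191

# Stand-in for sync_dev_uevents.sh: ask the kernel to re-read the partition table.
partprobe "$disk"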
00:08:45.449 11:52:50 -- setup/common.sh@57 -- # (( part++ )) 00:08:45.449 11:52:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:45.449 11:52:50 -- setup/common.sh@62 -- # wait 63847 00:08:45.449 11:52:50 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.449 11:52:50 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:45.449 11:52:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.449 11:52:50 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:45.449 11:52:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:45.449 11:52:50 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.449 11:52:50 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:45.449 11:52:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:45.449 11:52:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:45.449 11:52:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.449 11:52:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:45.449 11:52:50 -- setup/devices.sh@53 -- # local found=0 00:08:45.449 11:52:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:45.449 11:52:50 -- setup/devices.sh@56 -- # : 00:08:45.449 11:52:50 -- setup/devices.sh@59 -- # local pci status 00:08:45.449 11:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.449 11:52:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:45.449 11:52:50 -- setup/devices.sh@47 -- # setup output config 00:08:45.449 11:52:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:45.449 11:52:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:45.449 11:52:50 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.449 11:52:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:45.449 11:52:50 -- setup/devices.sh@63 -- # found=1 00:08:45.449 11:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.449 11:52:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.449 11:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.707 11:52:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.707 11:52:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.967 11:52:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:45.967 11:52:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.967 11:52:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:45.967 11:52:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:45.967 11:52:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.967 11:52:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:45.967 11:52:51 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:45.967 11:52:51 -- setup/devices.sh@110 -- # cleanup_nvme 00:08:45.967 11:52:51 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.967 11:52:51 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.967 11:52:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:45.967 11:52:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:45.967 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:45.967 11:52:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:45.967 11:52:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:46.236 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.236 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.236 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:46.236 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:46.236 11:52:51 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:46.236 11:52:51 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:46.236 11:52:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:46.236 11:52:51 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:46.236 11:52:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:46.236 11:52:51 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:46.236 11:52:51 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:46.236 11:52:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:46.236 11:52:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:46.236 11:52:51 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:46.236 11:52:51 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:46.236 11:52:51 -- setup/devices.sh@53 -- # local found=0 00:08:46.236 11:52:51 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:46.237 11:52:51 -- setup/devices.sh@56 -- # : 00:08:46.237 11:52:51 -- setup/devices.sh@59 -- # local pci status 00:08:46.237 11:52:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:46.237 11:52:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:46.237 11:52:51 -- setup/devices.sh@47 -- # setup output config 00:08:46.237 11:52:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:46.237 11:52:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:46.496 11:52:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:46.496 11:52:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:46.496 11:52:51 -- setup/devices.sh@63 -- # found=1 00:08:46.496 11:52:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:46.496 11:52:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:46.496 
11:52:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:46.754 11:52:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:46.754 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.011 11:52:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:47.011 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.011 11:52:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:47.011 11:52:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:47.011 11:52:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:47.011 11:52:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:47.012 11:52:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:47.012 11:52:52 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:47.012 11:52:52 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:08:47.012 11:52:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:47.012 11:52:52 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:47.012 11:52:52 -- setup/devices.sh@50 -- # local mount_point= 00:08:47.012 11:52:52 -- setup/devices.sh@51 -- # local test_file= 00:08:47.012 11:52:52 -- setup/devices.sh@53 -- # local found=0 00:08:47.012 11:52:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:47.012 11:52:52 -- setup/devices.sh@59 -- # local pci status 00:08:47.012 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.012 11:52:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:47.012 11:52:52 -- setup/devices.sh@47 -- # setup output config 00:08:47.012 11:52:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:47.012 11:52:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:47.270 11:52:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:47.270 11:52:52 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:47.270 11:52:52 -- setup/devices.sh@63 -- # found=1 00:08:47.270 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.270 11:52:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:47.270 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.528 11:52:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:47.528 11:52:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.786 11:52:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:47.786 11:52:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:47.786 11:52:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:47.786 11:52:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:47.786 11:52:53 -- setup/devices.sh@68 -- # return 0 00:08:47.786 11:52:53 -- setup/devices.sh@128 -- # cleanup_nvme 00:08:47.786 11:52:53 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:47.786 11:52:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:47.786 11:52:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:47.786 11:52:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:47.786 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:08:47.786 00:08:47.786 real 0m4.637s 00:08:47.786 user 0m1.041s 00:08:47.786 sys 0m1.261s 00:08:47.786 11:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.786 ************************************ 00:08:47.786 END TEST nvme_mount 00:08:47.786 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.786 ************************************ 00:08:47.786 11:52:53 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:47.786 11:52:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.786 11:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.786 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.786 ************************************ 00:08:47.786 START TEST dm_mount 00:08:47.786 ************************************ 00:08:47.786 11:52:53 -- common/autotest_common.sh@1114 -- # dm_mount 00:08:47.786 11:52:53 -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:47.786 11:52:53 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:47.787 11:52:53 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:47.787 11:52:53 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:47.787 11:52:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:47.787 11:52:53 -- setup/common.sh@40 -- # local part_no=2 00:08:47.787 11:52:53 -- setup/common.sh@41 -- # local size=1073741824 00:08:47.787 11:52:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:47.787 11:52:53 -- setup/common.sh@44 -- # parts=() 00:08:47.787 11:52:53 -- setup/common.sh@44 -- # local parts 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:47.787 11:52:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part++ )) 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:47.787 11:52:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part++ )) 00:08:47.787 11:52:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:47.787 11:52:53 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:47.787 11:52:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:47.787 11:52:53 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:48.721 Creating new GPT entries in memory. 00:08:48.721 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:48.721 other utilities. 00:08:48.721 11:52:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:48.721 11:52:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:48.721 11:52:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:48.721 11:52:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:48.721 11:52:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:50.115 Creating new GPT entries in memory. 00:08:50.115 The operation has completed successfully. 00:08:50.115 11:52:55 -- setup/common.sh@57 -- # (( part++ )) 00:08:50.115 11:52:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:50.115 11:52:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
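The nvme_mount test that finishes above follows one cycle: format the partition with mkfs.ext4 -qF, mount it under test/setup/nvme_mount, create a test file, verify it, then umount and wipefs everything so the next test starts from a blank device. A condensed, hedged sketch of that cycle is below; the nvme_mount path and test_nvme file name are taken from the trace, everything else is illustrative.

#!/usr/bin/env bash
# Sketch: the format / mount / verify / cleanup cycle used by the nvme_mount test above.
# Destructive to the named partition - example only.
set -euo pipefail

dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
test_file=$mnt/test_nvme

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"               # quiet, force: same flags as in the trace
mount "$dev" "$mnt"

: > "$test_file"                   # create the marker file the test later checks for
[[ -e $test_file ]] || { echo "verify failed" >&2; exit 1; }

# Cleanup: unmount and erase the filesystem signature so later tests see a clean device.
rm "$test_file"
umount "$mnt"
wipefs --all "$dev"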
2048 : part_end + 1 )) 00:08:50.115 11:52:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:50.115 11:52:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:51.052 The operation has completed successfully. 00:08:51.052 11:52:56 -- setup/common.sh@57 -- # (( part++ )) 00:08:51.052 11:52:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:51.052 11:52:56 -- setup/common.sh@62 -- # wait 64307 00:08:51.052 11:52:56 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:51.052 11:52:56 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.052 11:52:56 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:51.052 11:52:56 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:51.052 11:52:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:51.052 11:52:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:51.052 11:52:56 -- setup/devices.sh@161 -- # break 00:08:51.052 11:52:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:51.052 11:52:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:51.052 11:52:56 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:51.052 11:52:56 -- setup/devices.sh@166 -- # dm=dm-0 00:08:51.052 11:52:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:51.052 11:52:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:51.052 11:52:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.052 11:52:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:51.052 11:52:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.052 11:52:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:51.052 11:52:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:51.052 11:52:56 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.052 11:52:56 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:51.052 11:52:56 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:51.052 11:52:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:51.052 11:52:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.052 11:52:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:51.052 11:52:56 -- setup/devices.sh@53 -- # local found=0 00:08:51.052 11:52:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:51.052 11:52:56 -- setup/devices.sh@56 -- # : 00:08:51.052 11:52:56 -- setup/devices.sh@59 -- # local pci status 00:08:51.052 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.052 11:52:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:51.052 11:52:56 -- setup/devices.sh@47 -- # setup output config 00:08:51.052 11:52:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.052 11:52:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:51.311 11:52:56 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.311 11:52:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:51.311 11:52:56 -- setup/devices.sh@63 -- # found=1 00:08:51.311 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.311 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.311 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.570 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.570 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.570 11:52:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.570 11:52:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.570 11:52:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:51.570 11:52:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:51.570 11:52:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.570 11:52:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:51.570 11:52:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:51.570 11:52:57 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:51.829 11:52:57 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:51.829 11:52:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:08:51.829 11:52:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:51.829 11:52:57 -- setup/devices.sh@50 -- # local mount_point= 00:08:51.829 11:52:57 -- setup/devices.sh@51 -- # local test_file= 00:08:51.829 11:52:57 -- setup/devices.sh@53 -- # local found=0 00:08:51.829 11:52:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:51.829 11:52:57 -- setup/devices.sh@59 -- # local pci status 00:08:51.829 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.829 11:52:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:08:51.829 11:52:57 -- setup/devices.sh@47 -- # setup output config 00:08:51.829 11:52:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.829 11:52:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:51.829 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.829 11:52:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:51.829 11:52:57 -- setup/devices.sh@63 -- # found=1 00:08:51.829 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.829 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:51.829 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.088 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:52.088 11:52:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.347 11:52:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:08:52.347 11:52:57 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.347 11:52:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:52.347 11:52:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:52.347 11:52:57 -- setup/devices.sh@68 -- # return 0 00:08:52.347 11:52:57 -- setup/devices.sh@187 -- # cleanup_dm 00:08:52.347 11:52:57 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:52.347 11:52:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:52.347 11:52:57 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:52.347 11:52:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:52.347 11:52:57 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:52.347 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:52.347 11:52:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:52.347 11:52:57 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:52.347 00:08:52.347 real 0m4.608s 00:08:52.347 user 0m0.723s 00:08:52.347 sys 0m0.799s 00:08:52.347 11:52:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.347 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:08:52.347 ************************************ 00:08:52.347 END TEST dm_mount 00:08:52.347 ************************************ 00:08:52.347 11:52:57 -- setup/devices.sh@1 -- # cleanup 00:08:52.347 11:52:57 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:52.347 11:52:57 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.347 11:52:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:52.347 11:52:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:52.347 11:52:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:52.347 11:52:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:52.915 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:52.915 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:52.915 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:52.915 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:52.915 11:52:58 -- setup/devices.sh@12 -- # cleanup_dm 00:08:52.915 11:52:58 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:52.915 11:52:58 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:52.915 11:52:58 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:52.915 11:52:58 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:52.915 11:52:58 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:52.915 11:52:58 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:52.915 00:08:52.915 real 0m10.976s 00:08:52.915 user 0m2.544s 00:08:52.915 sys 0m2.712s 00:08:52.915 11:52:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.915 ************************************ 00:08:52.915 END TEST devices 00:08:52.915 ************************************ 00:08:52.915 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.915 00:08:52.915 real 0m23.377s 00:08:52.915 user 0m8.057s 00:08:52.915 sys 0m9.632s 00:08:52.915 11:52:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.916 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.916 ************************************ 00:08:52.916 END TEST setup.sh 00:08:52.916 ************************************ 00:08:52.916 11:52:58 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:52.916 Hugepages 00:08:52.916 node hugesize free / total 00:08:52.916 node0 1048576kB 0 / 0 00:08:52.916 node0 2048kB 2048 / 2048 00:08:52.916 00:08:52.916 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:53.174 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:53.174 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:53.174 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:53.174 11:52:58 -- spdk/autotest.sh@128 -- # uname -s 00:08:53.174 11:52:58 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:08:53.174 11:52:58 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:08:53.174 11:52:58 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.111 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.111 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.111 11:52:59 -- common/autotest_common.sh@1527 -- # sleep 1 00:08:55.047 11:53:00 -- common/autotest_common.sh@1528 -- # bdfs=() 00:08:55.047 11:53:00 -- common/autotest_common.sh@1528 -- # local bdfs 00:08:55.047 11:53:00 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:08:55.047 11:53:00 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:08:55.047 11:53:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:55.047 11:53:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:55.047 11:53:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.047 11:53:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.047 11:53:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:55.306 11:53:00 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:08:55.306 11:53:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:55.306 11:53:00 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:55.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.565 Waiting for block devices as requested 00:08:55.565 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.825 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.825 11:53:01 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:08:55.825 11:53:01 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:08:55.825 11:53:01 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # grep oacs 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:08:55.825 11:53:01 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:08:55.825 11:53:01 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:08:55.825 11:53:01 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1552 -- # continue 00:08:55.825 11:53:01 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:08:55.825 11:53:01 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # grep oacs 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:55.825 11:53:01 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:08:55.825 11:53:01 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:08:55.825 11:53:01 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:08:55.825 11:53:01 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:08:55.825 11:53:01 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:08:55.825 11:53:01 -- common/autotest_common.sh@1552 -- # continue 00:08:55.825 11:53:01 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:08:55.825 11:53:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.825 11:53:01 -- common/autotest_common.sh@10 -- # set +x 00:08:55.825 11:53:01 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:08:55.825 11:53:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.825 11:53:01 -- common/autotest_common.sh@10 -- # set +x 00:08:55.825 11:53:01 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:56.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:56.823 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:56.823 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:08:56.823 11:53:02 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:08:56.823 11:53:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.823 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:56.823 11:53:02 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:08:56.823 11:53:02 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:08:56.823 11:53:02 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:08:56.823 11:53:02 -- common/autotest_common.sh@1572 -- # bdfs=() 00:08:56.823 11:53:02 -- common/autotest_common.sh@1572 -- # local bdfs 00:08:56.823 11:53:02 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:08:56.823 11:53:02 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:56.823 11:53:02 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:56.823 11:53:02 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.823 11:53:02 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.823 11:53:02 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:08:57.083 11:53:02 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:57.083 11:53:02 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:08:57.083 11:53:02 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:08:57.083 11:53:02 -- common/autotest_common.sh@1575 -- # device=0x0010 00:08:57.083 11:53:02 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:57.083 11:53:02 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:08:57.083 11:53:02 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:08:57.083 11:53:02 -- common/autotest_common.sh@1575 -- # device=0x0010 00:08:57.083 11:53:02 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:57.083 11:53:02 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:08:57.083 11:53:02 -- common/autotest_common.sh@1588 -- # return 0 00:08:57.083 11:53:02 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:08:57.083 11:53:02 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:08:57.083 11:53:02 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:57.083 11:53:02 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:57.083 11:53:02 -- spdk/autotest.sh@160 -- # timing_enter lib 00:08:57.083 11:53:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.083 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 11:53:02 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:57.083 11:53:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.083 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 ************************************ 00:08:57.083 START TEST env 00:08:57.083 ************************************ 00:08:57.083 11:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:57.083 * Looking for test storage... 
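The pre-cleanup and opal_revert_cleanup passes traced earlier in this stage reduce to a handful of nvme-cli and sysfs probes. A condensed sketch of the same checks, reusing the controller and BDF values from this run (illustrative only, not part of the test scripts):
nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2      # ' 0x12a': bit 3 (0x8) set, namespace management supported
nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2   # ' 0': no unallocated NVM capacity, nothing to revert
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'   # 0000:00:06.0 and 0000:00:07.0 here
cat /sys/bus/pci/devices/0000:00:06.0/device           # 0x0010 (QEMU NVMe), not the 0x0a54 devices opal_revert_cleanup targets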
00:08:57.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:57.083 11:53:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:57.083 11:53:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:57.083 11:53:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:57.083 11:53:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:57.083 11:53:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:57.083 11:53:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:57.083 11:53:02 -- scripts/common.sh@335 -- # IFS=.-: 00:08:57.083 11:53:02 -- scripts/common.sh@335 -- # read -ra ver1 00:08:57.083 11:53:02 -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.083 11:53:02 -- scripts/common.sh@336 -- # read -ra ver2 00:08:57.083 11:53:02 -- scripts/common.sh@337 -- # local 'op=<' 00:08:57.083 11:53:02 -- scripts/common.sh@339 -- # ver1_l=2 00:08:57.083 11:53:02 -- scripts/common.sh@340 -- # ver2_l=1 00:08:57.083 11:53:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:57.083 11:53:02 -- scripts/common.sh@343 -- # case "$op" in 00:08:57.083 11:53:02 -- scripts/common.sh@344 -- # : 1 00:08:57.083 11:53:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:57.083 11:53:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.083 11:53:02 -- scripts/common.sh@364 -- # decimal 1 00:08:57.083 11:53:02 -- scripts/common.sh@352 -- # local d=1 00:08:57.083 11:53:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.083 11:53:02 -- scripts/common.sh@354 -- # echo 1 00:08:57.083 11:53:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:57.083 11:53:02 -- scripts/common.sh@365 -- # decimal 2 00:08:57.083 11:53:02 -- scripts/common.sh@352 -- # local d=2 00:08:57.083 11:53:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.083 11:53:02 -- scripts/common.sh@354 -- # echo 2 00:08:57.083 11:53:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:57.083 11:53:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:57.083 11:53:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:57.083 11:53:02 -- scripts/common.sh@367 -- # return 0 00:08:57.083 11:53:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.083 --rc genhtml_branch_coverage=1 00:08:57.083 --rc genhtml_function_coverage=1 00:08:57.083 --rc genhtml_legend=1 00:08:57.083 --rc geninfo_all_blocks=1 00:08:57.083 --rc geninfo_unexecuted_blocks=1 00:08:57.083 00:08:57.083 ' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.083 --rc genhtml_branch_coverage=1 00:08:57.083 --rc genhtml_function_coverage=1 00:08:57.083 --rc genhtml_legend=1 00:08:57.083 --rc geninfo_all_blocks=1 00:08:57.083 --rc geninfo_unexecuted_blocks=1 00:08:57.083 00:08:57.083 ' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.083 --rc genhtml_branch_coverage=1 00:08:57.083 --rc genhtml_function_coverage=1 00:08:57.083 --rc genhtml_legend=1 00:08:57.083 --rc geninfo_all_blocks=1 00:08:57.083 --rc geninfo_unexecuted_blocks=1 00:08:57.083 00:08:57.083 ' 00:08:57.083 11:53:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.083 --rc genhtml_branch_coverage=1 00:08:57.083 --rc genhtml_function_coverage=1 00:08:57.083 --rc genhtml_legend=1 00:08:57.083 --rc geninfo_all_blocks=1 00:08:57.083 --rc geninfo_unexecuted_blocks=1 00:08:57.083 00:08:57.083 ' 00:08:57.083 11:53:02 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:57.083 11:53:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.083 11:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.083 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 ************************************ 00:08:57.083 START TEST env_memory 00:08:57.083 ************************************ 00:08:57.083 11:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:57.083 00:08:57.083 00:08:57.083 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.083 http://cunit.sourceforge.net/ 00:08:57.083 00:08:57.083 00:08:57.083 Suite: memory 00:08:57.344 Test: alloc and free memory map ...[2024-11-29 11:53:02.625357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:57.344 passed 00:08:57.344 Test: mem map translation ...[2024-11-29 11:53:02.656682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:57.344 [2024-11-29 11:53:02.656741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:57.344 [2024-11-29 11:53:02.656799] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:57.344 [2024-11-29 11:53:02.656811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:57.344 passed 00:08:57.344 Test: mem map registration ...[2024-11-29 11:53:02.721455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:57.344 [2024-11-29 11:53:02.721541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:57.344 passed 00:08:57.344 Test: mem map adjacent registrations ...passed 00:08:57.344 00:08:57.344 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.344 suites 1 1 n/a 0 0 00:08:57.344 tests 4 4 4 0 0 00:08:57.344 asserts 152 152 152 0 n/a 00:08:57.344 00:08:57.344 Elapsed time = 0.218 seconds 00:08:57.344 00:08:57.344 real 0m0.236s 00:08:57.344 user 0m0.218s 00:08:57.344 sys 0m0.015s 00:08:57.344 11:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.344 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:57.344 ************************************ 00:08:57.344 END TEST env_memory 00:08:57.344 ************************************ 00:08:57.604 11:53:02 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:57.604 11:53:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.604 11:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.604 11:53:02 -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.604 ************************************ 00:08:57.604 START TEST env_vtophys 00:08:57.604 ************************************ 00:08:57.604 11:53:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:57.604 EAL: lib.eal log level changed from notice to debug 00:08:57.604 EAL: Detected lcore 0 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 1 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 2 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 3 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 4 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 5 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 6 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 7 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 8 as core 0 on socket 0 00:08:57.604 EAL: Detected lcore 9 as core 0 on socket 0 00:08:57.604 EAL: Maximum logical cores by configuration: 128 00:08:57.604 EAL: Detected CPU lcores: 10 00:08:57.604 EAL: Detected NUMA nodes: 1 00:08:57.604 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:57.604 EAL: Detected shared linkage of DPDK 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:57.604 EAL: Registered [vdev] bus. 00:08:57.604 EAL: bus.vdev log level changed from disabled to notice 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:57.604 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:57.604 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:57.604 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:57.605 EAL: No shared files mode enabled, IPC will be disabled 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Selected IOVA mode 'PA' 00:08:57.605 EAL: Probing VFIO support... 00:08:57.605 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:57.605 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:57.605 EAL: Ask a virtual area of 0x2e000 bytes 00:08:57.605 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:57.605 EAL: Setting up physically contiguous memory... 
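EAL skips VFIO above because this VM has no vfio module loaded, which is also why setup.sh bound the controllers to uio_pci_generic and EAL falls back to IOVA mode 'PA'. A minimal sketch of the alternative on a host with a working IOMMU, assuming the DRIVER_OVERRIDE knob that current setup.sh versions accept (illustrative, not executed in this run):
modprobe vfio-pci
DRIVER_OVERRIDE=vfio-pci /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # rebind NVMe to vfio-pci instead of uio_pci_generic
# with vfio available, EAL can select IOVA mode 'VA' instead of 'PA'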
00:08:57.605 EAL: Setting maximum number of open files to 524288 00:08:57.605 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:57.605 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:57.605 EAL: Ask a virtual area of 0x61000 bytes 00:08:57.605 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:57.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:57.605 EAL: Ask a virtual area of 0x400000000 bytes 00:08:57.605 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:57.605 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:57.605 EAL: Ask a virtual area of 0x61000 bytes 00:08:57.605 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:57.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:57.605 EAL: Ask a virtual area of 0x400000000 bytes 00:08:57.605 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:57.605 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:57.605 EAL: Ask a virtual area of 0x61000 bytes 00:08:57.605 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:57.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:57.605 EAL: Ask a virtual area of 0x400000000 bytes 00:08:57.605 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:57.605 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:57.605 EAL: Ask a virtual area of 0x61000 bytes 00:08:57.605 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:57.605 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:57.605 EAL: Ask a virtual area of 0x400000000 bytes 00:08:57.605 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:57.605 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:57.605 EAL: Hugepages will be freed exactly as allocated. 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: TSC frequency is ~2200000 KHz 00:08:57.605 EAL: Main lcore 0 is ready (tid=7f7867a1aa00;cpuset=[0]) 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 0 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 2MB 00:08:57.605 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:57.605 EAL: Mem event callback 'spdk:(nil)' registered 00:08:57.605 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:57.605 00:08:57.605 00:08:57.605 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.605 http://cunit.sourceforge.net/ 00:08:57.605 00:08:57.605 00:08:57.605 Suite: components_suite 00:08:57.605 Test: vtophys_malloc_test ...passed 00:08:57.605 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
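The memseg geometry just printed is self-consistent: each of the four segment lists covers n_segs:8192 hugepages of hugepage_sz:2097152 bytes, and 8192 * 2 MiB = 16 GiB = 0x400000000, which matches every "size = 0x400000000" reservation above, so roughly 64 GiB of virtual address space is pre-reserved for socket 0 before any hugepages are actually mapped.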
00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 4MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 4MB 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 6MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 6MB 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 10MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 10MB 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 18MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 18MB 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 34MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 34MB 00:08:57.605 EAL: Trying to obtain current memory policy. 
00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.605 EAL: Restoring previous memory policy: 4 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was expanded by 66MB 00:08:57.605 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.605 EAL: request: mp_malloc_sync 00:08:57.605 EAL: No shared files mode enabled, IPC is disabled 00:08:57.605 EAL: Heap on socket 0 was shrunk by 66MB 00:08:57.605 EAL: Trying to obtain current memory policy. 00:08:57.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.864 EAL: Restoring previous memory policy: 4 00:08:57.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.864 EAL: request: mp_malloc_sync 00:08:57.864 EAL: No shared files mode enabled, IPC is disabled 00:08:57.864 EAL: Heap on socket 0 was expanded by 130MB 00:08:57.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.864 EAL: request: mp_malloc_sync 00:08:57.864 EAL: No shared files mode enabled, IPC is disabled 00:08:57.864 EAL: Heap on socket 0 was shrunk by 130MB 00:08:57.864 EAL: Trying to obtain current memory policy. 00:08:57.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.864 EAL: Restoring previous memory policy: 4 00:08:57.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.864 EAL: request: mp_malloc_sync 00:08:57.864 EAL: No shared files mode enabled, IPC is disabled 00:08:57.864 EAL: Heap on socket 0 was expanded by 258MB 00:08:57.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.123 EAL: request: mp_malloc_sync 00:08:58.123 EAL: No shared files mode enabled, IPC is disabled 00:08:58.123 EAL: Heap on socket 0 was shrunk by 258MB 00:08:58.123 EAL: Trying to obtain current memory policy. 00:08:58.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:58.123 EAL: Restoring previous memory policy: 4 00:08:58.123 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.123 EAL: request: mp_malloc_sync 00:08:58.123 EAL: No shared files mode enabled, IPC is disabled 00:08:58.123 EAL: Heap on socket 0 was expanded by 514MB 00:08:58.382 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.641 EAL: request: mp_malloc_sync 00:08:58.641 EAL: No shared files mode enabled, IPC is disabled 00:08:58.641 EAL: Heap on socket 0 was shrunk by 514MB 00:08:58.641 EAL: Trying to obtain current memory policy. 
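A note on the grow sizes in this sequence: the expansions seen so far (4, 6, 10, 18, 34, 66, 130 and 258 MB, with 514 and 1026 MB still to come) track the test's doubling allocations of 2, 4, 8, ... 1024 MB, each rounded up by one extra 2 MB hugepage, presumably to hold the DPDK malloc element header that no longer fits when an exact power-of-two size is requested.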
00:08:58.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:58.900 EAL: Restoring previous memory policy: 4 00:08:58.900 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.900 EAL: request: mp_malloc_sync 00:08:58.900 EAL: No shared files mode enabled, IPC is disabled 00:08:58.900 EAL: Heap on socket 0 was expanded by 1026MB 00:08:59.159 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.418 passed 00:08:59.418 00:08:59.418 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.418 suites 1 1 n/a 0 0 00:08:59.418 tests 2 2 2 0 0 00:08:59.418 asserts 5218 5218 5218 0 n/a 00:08:59.418 00:08:59.418 Elapsed time = 1.797 seconds 00:08:59.418 EAL: request: mp_malloc_sync 00:08:59.418 EAL: No shared files mode enabled, IPC is disabled 00:08:59.418 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:59.418 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.418 EAL: request: mp_malloc_sync 00:08:59.419 EAL: No shared files mode enabled, IPC is disabled 00:08:59.419 EAL: Heap on socket 0 was shrunk by 2MB 00:08:59.419 EAL: No shared files mode enabled, IPC is disabled 00:08:59.419 EAL: No shared files mode enabled, IPC is disabled 00:08:59.419 EAL: No shared files mode enabled, IPC is disabled 00:08:59.419 00:08:59.419 real 0m1.996s 00:08:59.419 user 0m1.139s 00:08:59.419 sys 0m0.725s 00:08:59.419 11:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.419 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.419 ************************************ 00:08:59.419 END TEST env_vtophys 00:08:59.419 ************************************ 00:08:59.419 11:53:04 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:59.419 11:53:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.419 11:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.419 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.419 ************************************ 00:08:59.419 START TEST env_pci 00:08:59.419 ************************************ 00:08:59.419 11:53:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:59.679 00:08:59.679 00:08:59.679 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.679 http://cunit.sourceforge.net/ 00:08:59.679 00:08:59.679 00:08:59.679 Suite: pci 00:08:59.679 Test: pci_hook ...[2024-11-29 11:53:04.933633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65451 has claimed it 00:08:59.679 passed 00:08:59.679 00:08:59.679 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.679 suites 1 1 n/a 0 0 00:08:59.679 tests 1 1 1 0 0 00:08:59.679 asserts 25 25 25 0 n/a 00:08:59.679 00:08:59.679 Elapsed time = 0.003 seconds 00:08:59.679 EAL: Cannot find device (10000:00:01.0) 00:08:59.679 EAL: Failed to attach device on primary process 00:08:59.679 00:08:59.679 real 0m0.021s 00:08:59.679 user 0m0.006s 00:08:59.679 sys 0m0.015s 00:08:59.679 11:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.679 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.679 ************************************ 00:08:59.679 END TEST env_pci 00:08:59.679 ************************************ 00:08:59.679 11:53:04 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:59.679 11:53:04 -- env/env.sh@15 -- # uname 00:08:59.679 11:53:04 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:59.679 11:53:04 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:59.679 11:53:04 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:59.679 11:53:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:59.680 11:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.680 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.680 ************************************ 00:08:59.680 START TEST env_dpdk_post_init 00:08:59.680 ************************************ 00:08:59.680 11:53:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:59.680 EAL: Detected CPU lcores: 10 00:08:59.680 EAL: Detected NUMA nodes: 1 00:08:59.680 EAL: Detected shared linkage of DPDK 00:08:59.680 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:59.680 EAL: Selected IOVA mode 'PA' 00:08:59.680 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:59.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:59.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:08:59.680 Starting DPDK initialization... 00:08:59.680 Starting SPDK post initialization... 00:08:59.680 SPDK NVMe probe 00:08:59.680 Attaching to 0000:00:06.0 00:08:59.680 Attaching to 0000:00:07.0 00:08:59.680 Attached to 0000:00:06.0 00:08:59.680 Attached to 0000:00:07.0 00:08:59.680 Cleaning up... 00:08:59.680 00:08:59.680 real 0m0.183s 00:08:59.680 user 0m0.052s 00:08:59.680 sys 0m0.031s 00:08:59.680 11:53:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.680 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.680 ************************************ 00:08:59.680 END TEST env_dpdk_post_init 00:08:59.680 ************************************ 00:08:59.939 11:53:05 -- env/env.sh@26 -- # uname 00:08:59.939 11:53:05 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:59.939 11:53:05 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:59.939 11:53:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.939 11:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.939 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.939 ************************************ 00:08:59.939 START TEST env_mem_callbacks 00:08:59.939 ************************************ 00:08:59.939 11:53:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:59.939 EAL: Detected CPU lcores: 10 00:08:59.939 EAL: Detected NUMA nodes: 1 00:08:59.939 EAL: Detected shared linkage of DPDK 00:08:59.939 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:59.939 EAL: Selected IOVA mode 'PA' 00:08:59.939 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:59.939 00:08:59.939 00:08:59.939 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.939 http://cunit.sourceforge.net/ 00:08:59.939 00:08:59.939 00:08:59.939 Suite: memory 00:08:59.939 Test: test ... 
00:08:59.939 register 0x200000200000 2097152 00:08:59.939 malloc 3145728 00:08:59.939 register 0x200000400000 4194304 00:08:59.939 buf 0x200000500000 len 3145728 PASSED 00:08:59.939 malloc 64 00:08:59.940 buf 0x2000004fff40 len 64 PASSED 00:08:59.940 malloc 4194304 00:08:59.940 register 0x200000800000 6291456 00:08:59.940 buf 0x200000a00000 len 4194304 PASSED 00:08:59.940 free 0x200000500000 3145728 00:08:59.940 free 0x2000004fff40 64 00:08:59.940 unregister 0x200000400000 4194304 PASSED 00:08:59.940 free 0x200000a00000 4194304 00:08:59.940 unregister 0x200000800000 6291456 PASSED 00:08:59.940 malloc 8388608 00:08:59.940 register 0x200000400000 10485760 00:08:59.940 buf 0x200000600000 len 8388608 PASSED 00:08:59.940 free 0x200000600000 8388608 00:08:59.940 unregister 0x200000400000 10485760 PASSED 00:08:59.940 passed 00:08:59.940 00:08:59.940 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.940 suites 1 1 n/a 0 0 00:08:59.940 tests 1 1 1 0 0 00:08:59.940 asserts 15 15 15 0 n/a 00:08:59.940 00:08:59.940 Elapsed time = 0.010 seconds 00:08:59.940 00:08:59.940 real 0m0.146s 00:08:59.940 user 0m0.017s 00:08:59.940 sys 0m0.029s 00:08:59.940 11:53:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.940 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.940 ************************************ 00:08:59.940 END TEST env_mem_callbacks 00:08:59.940 ************************************ 00:08:59.940 00:08:59.940 real 0m3.053s 00:08:59.940 user 0m1.626s 00:08:59.940 sys 0m1.078s 00:08:59.940 11:53:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.940 ************************************ 00:08:59.940 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:59.940 END TEST env 00:08:59.940 ************************************ 00:09:00.198 11:53:05 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:00.198 11:53:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.198 11:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.198 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.198 ************************************ 00:09:00.198 START TEST rpc 00:09:00.198 ************************************ 00:09:00.198 11:53:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:00.198 * Looking for test storage... 
00:09:00.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:00.198 11:53:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:00.198 11:53:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:00.198 11:53:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:00.198 11:53:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:00.198 11:53:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:00.198 11:53:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:00.198 11:53:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:00.198 11:53:05 -- scripts/common.sh@335 -- # IFS=.-: 00:09:00.198 11:53:05 -- scripts/common.sh@335 -- # read -ra ver1 00:09:00.198 11:53:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.198 11:53:05 -- scripts/common.sh@336 -- # read -ra ver2 00:09:00.198 11:53:05 -- scripts/common.sh@337 -- # local 'op=<' 00:09:00.198 11:53:05 -- scripts/common.sh@339 -- # ver1_l=2 00:09:00.198 11:53:05 -- scripts/common.sh@340 -- # ver2_l=1 00:09:00.198 11:53:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:00.198 11:53:05 -- scripts/common.sh@343 -- # case "$op" in 00:09:00.198 11:53:05 -- scripts/common.sh@344 -- # : 1 00:09:00.198 11:53:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:00.198 11:53:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.198 11:53:05 -- scripts/common.sh@364 -- # decimal 1 00:09:00.198 11:53:05 -- scripts/common.sh@352 -- # local d=1 00:09:00.198 11:53:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.198 11:53:05 -- scripts/common.sh@354 -- # echo 1 00:09:00.198 11:53:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:00.198 11:53:05 -- scripts/common.sh@365 -- # decimal 2 00:09:00.198 11:53:05 -- scripts/common.sh@352 -- # local d=2 00:09:00.198 11:53:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.198 11:53:05 -- scripts/common.sh@354 -- # echo 2 00:09:00.198 11:53:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:00.198 11:53:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:00.198 11:53:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:00.198 11:53:05 -- scripts/common.sh@367 -- # return 0 00:09:00.198 11:53:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.198 11:53:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:00.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.198 --rc genhtml_branch_coverage=1 00:09:00.198 --rc genhtml_function_coverage=1 00:09:00.198 --rc genhtml_legend=1 00:09:00.198 --rc geninfo_all_blocks=1 00:09:00.198 --rc geninfo_unexecuted_blocks=1 00:09:00.198 00:09:00.198 ' 00:09:00.198 11:53:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:00.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.198 --rc genhtml_branch_coverage=1 00:09:00.198 --rc genhtml_function_coverage=1 00:09:00.198 --rc genhtml_legend=1 00:09:00.198 --rc geninfo_all_blocks=1 00:09:00.198 --rc geninfo_unexecuted_blocks=1 00:09:00.198 00:09:00.198 ' 00:09:00.198 11:53:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:00.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.198 --rc genhtml_branch_coverage=1 00:09:00.198 --rc genhtml_function_coverage=1 00:09:00.198 --rc genhtml_legend=1 00:09:00.198 --rc geninfo_all_blocks=1 00:09:00.198 --rc geninfo_unexecuted_blocks=1 00:09:00.198 00:09:00.198 ' 00:09:00.198 11:53:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:00.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.198 --rc genhtml_branch_coverage=1 00:09:00.198 --rc genhtml_function_coverage=1 00:09:00.198 --rc genhtml_legend=1 00:09:00.198 --rc geninfo_all_blocks=1 00:09:00.198 --rc geninfo_unexecuted_blocks=1 00:09:00.198 00:09:00.198 ' 00:09:00.198 11:53:05 -- rpc/rpc.sh@65 -- # spdk_pid=65573 00:09:00.198 11:53:05 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.198 11:53:05 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:00.198 11:53:05 -- rpc/rpc.sh@67 -- # waitforlisten 65573 00:09:00.198 11:53:05 -- common/autotest_common.sh@829 -- # '[' -z 65573 ']' 00:09:00.198 11:53:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.198 11:53:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.198 11:53:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.198 11:53:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.198 11:53:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.456 [2024-11-29 11:53:05.741936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:00.456 [2024-11-29 11:53:05.742074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65573 ] 00:09:00.456 [2024-11-29 11:53:05.883467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.715 [2024-11-29 11:53:05.999716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.715 [2024-11-29 11:53:05.999893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:00.715 [2024-11-29 11:53:05.999912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65573' to capture a snapshot of events at runtime. 00:09:00.715 [2024-11-29 11:53:05.999924] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65573 for offline analysis/debug. 
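With spdk_tgt up and listening on /var/tmp/spdk.sock, the rpc_integrity flow exercised below can also be driven by hand with scripts/rpc.py. A minimal sketch using the same target binary and bdev names as this test (the pid handling is illustrative):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # same tracepoint group mask as above
tgt_pid=$!
# wait for /var/tmp/spdk.sock to appear, then:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                       # default name Malloc0: 8 MiB at 512-byte blocks -> 16384 blocks
$rpc bdev_passthru_create -b Malloc0 -p Passthru0   # Passthru0 claims Malloc0 (claim_type exclusive_write)
$rpc bdev_get_bdevs | jq length                     # 2 while both bdevs exist
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0
kill "$tgt_pid"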
00:09:00.715 [2024-11-29 11:53:05.999956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.283 11:53:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.283 11:53:06 -- common/autotest_common.sh@862 -- # return 0 00:09:01.283 11:53:06 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:01.283 11:53:06 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:01.283 11:53:06 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:01.283 11:53:06 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:01.283 11:53:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.283 11:53:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.283 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.283 ************************************ 00:09:01.283 START TEST rpc_integrity 00:09:01.283 ************************************ 00:09:01.283 11:53:06 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:01.283 11:53:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:01.283 11:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.283 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.283 11:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.283 11:53:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:01.283 11:53:06 -- rpc/rpc.sh@13 -- # jq length 00:09:01.542 11:53:06 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:01.542 11:53:06 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:01.542 11:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.542 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.542 11:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.542 11:53:06 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:01.542 11:53:06 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:01.542 11:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.542 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.542 11:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.542 11:53:06 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:01.542 { 00:09:01.542 "name": "Malloc0", 00:09:01.542 "aliases": [ 00:09:01.542 "56ff7727-7292-430c-abc5-1ade6b047b0d" 00:09:01.542 ], 00:09:01.542 "product_name": "Malloc disk", 00:09:01.542 "block_size": 512, 00:09:01.542 "num_blocks": 16384, 00:09:01.542 "uuid": "56ff7727-7292-430c-abc5-1ade6b047b0d", 00:09:01.542 "assigned_rate_limits": { 00:09:01.542 "rw_ios_per_sec": 0, 00:09:01.542 "rw_mbytes_per_sec": 0, 00:09:01.542 "r_mbytes_per_sec": 0, 00:09:01.542 "w_mbytes_per_sec": 0 00:09:01.542 }, 00:09:01.542 "claimed": false, 00:09:01.542 "zoned": false, 00:09:01.542 "supported_io_types": { 00:09:01.542 "read": true, 00:09:01.542 "write": true, 00:09:01.542 "unmap": true, 00:09:01.542 "write_zeroes": true, 00:09:01.542 "flush": true, 00:09:01.542 "reset": true, 00:09:01.542 "compare": false, 00:09:01.542 "compare_and_write": false, 00:09:01.542 "abort": true, 00:09:01.542 "nvme_admin": false, 00:09:01.542 "nvme_io": false 00:09:01.542 }, 00:09:01.542 "memory_domains": [ 00:09:01.542 { 00:09:01.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.542 
"dma_device_type": 2 00:09:01.542 } 00:09:01.542 ], 00:09:01.542 "driver_specific": {} 00:09:01.542 } 00:09:01.542 ]' 00:09:01.542 11:53:06 -- rpc/rpc.sh@17 -- # jq length 00:09:01.542 11:53:06 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:01.542 11:53:06 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:01.542 11:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.542 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.542 [2024-11-29 11:53:06.934179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:01.542 [2024-11-29 11:53:06.934272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.542 [2024-11-29 11:53:06.934304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19b8030 00:09:01.542 [2024-11-29 11:53:06.934314] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.542 [2024-11-29 11:53:06.935891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.542 [2024-11-29 11:53:06.935958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:01.542 Passthru0 00:09:01.542 11:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.542 11:53:06 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:01.542 11:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.542 11:53:06 -- common/autotest_common.sh@10 -- # set +x 00:09:01.542 11:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.542 11:53:06 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:01.542 { 00:09:01.542 "name": "Malloc0", 00:09:01.542 "aliases": [ 00:09:01.543 "56ff7727-7292-430c-abc5-1ade6b047b0d" 00:09:01.543 ], 00:09:01.543 "product_name": "Malloc disk", 00:09:01.543 "block_size": 512, 00:09:01.543 "num_blocks": 16384, 00:09:01.543 "uuid": "56ff7727-7292-430c-abc5-1ade6b047b0d", 00:09:01.543 "assigned_rate_limits": { 00:09:01.543 "rw_ios_per_sec": 0, 00:09:01.543 "rw_mbytes_per_sec": 0, 00:09:01.543 "r_mbytes_per_sec": 0, 00:09:01.543 "w_mbytes_per_sec": 0 00:09:01.543 }, 00:09:01.543 "claimed": true, 00:09:01.543 "claim_type": "exclusive_write", 00:09:01.543 "zoned": false, 00:09:01.543 "supported_io_types": { 00:09:01.543 "read": true, 00:09:01.543 "write": true, 00:09:01.543 "unmap": true, 00:09:01.543 "write_zeroes": true, 00:09:01.543 "flush": true, 00:09:01.543 "reset": true, 00:09:01.543 "compare": false, 00:09:01.543 "compare_and_write": false, 00:09:01.543 "abort": true, 00:09:01.543 "nvme_admin": false, 00:09:01.543 "nvme_io": false 00:09:01.543 }, 00:09:01.543 "memory_domains": [ 00:09:01.543 { 00:09:01.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.543 "dma_device_type": 2 00:09:01.543 } 00:09:01.543 ], 00:09:01.543 "driver_specific": {} 00:09:01.543 }, 00:09:01.543 { 00:09:01.543 "name": "Passthru0", 00:09:01.543 "aliases": [ 00:09:01.543 "1cb44fd2-3416-561e-a65a-75efbffadb9c" 00:09:01.543 ], 00:09:01.543 "product_name": "passthru", 00:09:01.543 "block_size": 512, 00:09:01.543 "num_blocks": 16384, 00:09:01.543 "uuid": "1cb44fd2-3416-561e-a65a-75efbffadb9c", 00:09:01.543 "assigned_rate_limits": { 00:09:01.543 "rw_ios_per_sec": 0, 00:09:01.543 "rw_mbytes_per_sec": 0, 00:09:01.543 "r_mbytes_per_sec": 0, 00:09:01.543 "w_mbytes_per_sec": 0 00:09:01.543 }, 00:09:01.543 "claimed": false, 00:09:01.543 "zoned": false, 00:09:01.543 "supported_io_types": { 00:09:01.543 "read": true, 00:09:01.543 "write": true, 00:09:01.543 "unmap": true, 00:09:01.543 
"write_zeroes": true, 00:09:01.543 "flush": true, 00:09:01.543 "reset": true, 00:09:01.543 "compare": false, 00:09:01.543 "compare_and_write": false, 00:09:01.543 "abort": true, 00:09:01.543 "nvme_admin": false, 00:09:01.543 "nvme_io": false 00:09:01.543 }, 00:09:01.543 "memory_domains": [ 00:09:01.543 { 00:09:01.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.543 "dma_device_type": 2 00:09:01.543 } 00:09:01.543 ], 00:09:01.543 "driver_specific": { 00:09:01.543 "passthru": { 00:09:01.543 "name": "Passthru0", 00:09:01.543 "base_bdev_name": "Malloc0" 00:09:01.543 } 00:09:01.543 } 00:09:01.543 } 00:09:01.543 ]' 00:09:01.543 11:53:06 -- rpc/rpc.sh@21 -- # jq length 00:09:01.543 11:53:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:01.543 11:53:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:01.543 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.543 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.543 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.543 11:53:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:01.543 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.543 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.543 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.543 11:53:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:01.543 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.543 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.543 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.803 11:53:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:01.803 11:53:07 -- rpc/rpc.sh@26 -- # jq length 00:09:01.803 11:53:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:01.803 00:09:01.803 real 0m0.326s 00:09:01.803 user 0m0.206s 00:09:01.803 sys 0m0.047s 00:09:01.803 11:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.803 ************************************ 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 END TEST rpc_integrity 00:09:01.803 ************************************ 00:09:01.803 11:53:07 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:01.803 11:53:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.803 11:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 ************************************ 00:09:01.803 START TEST rpc_plugins 00:09:01.803 ************************************ 00:09:01.803 11:53:07 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:09:01.803 11:53:07 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:01.803 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.803 11:53:07 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:01.803 11:53:07 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:01.803 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.803 11:53:07 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:01.803 { 00:09:01.803 "name": "Malloc1", 00:09:01.803 "aliases": [ 00:09:01.803 "377ce50b-b75b-4075-973a-82b744556740" 00:09:01.803 ], 00:09:01.803 "product_name": "Malloc disk", 00:09:01.803 
"block_size": 4096, 00:09:01.803 "num_blocks": 256, 00:09:01.803 "uuid": "377ce50b-b75b-4075-973a-82b744556740", 00:09:01.803 "assigned_rate_limits": { 00:09:01.803 "rw_ios_per_sec": 0, 00:09:01.803 "rw_mbytes_per_sec": 0, 00:09:01.803 "r_mbytes_per_sec": 0, 00:09:01.803 "w_mbytes_per_sec": 0 00:09:01.803 }, 00:09:01.803 "claimed": false, 00:09:01.803 "zoned": false, 00:09:01.803 "supported_io_types": { 00:09:01.803 "read": true, 00:09:01.803 "write": true, 00:09:01.803 "unmap": true, 00:09:01.803 "write_zeroes": true, 00:09:01.803 "flush": true, 00:09:01.803 "reset": true, 00:09:01.803 "compare": false, 00:09:01.803 "compare_and_write": false, 00:09:01.803 "abort": true, 00:09:01.803 "nvme_admin": false, 00:09:01.803 "nvme_io": false 00:09:01.803 }, 00:09:01.803 "memory_domains": [ 00:09:01.803 { 00:09:01.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.803 "dma_device_type": 2 00:09:01.803 } 00:09:01.803 ], 00:09:01.803 "driver_specific": {} 00:09:01.803 } 00:09:01.803 ]' 00:09:01.803 11:53:07 -- rpc/rpc.sh@32 -- # jq length 00:09:01.803 11:53:07 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:01.803 11:53:07 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:01.803 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.803 11:53:07 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:01.803 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.803 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:01.803 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.803 11:53:07 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:01.803 11:53:07 -- rpc/rpc.sh@36 -- # jq length 00:09:02.062 11:53:07 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:02.062 00:09:02.062 real 0m0.162s 00:09:02.062 user 0m0.101s 00:09:02.062 sys 0m0.023s 00:09:02.062 11:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.062 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.062 ************************************ 00:09:02.062 END TEST rpc_plugins 00:09:02.062 ************************************ 00:09:02.062 11:53:07 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:02.062 11:53:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.062 11:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.062 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.062 ************************************ 00:09:02.062 START TEST rpc_trace_cmd_test 00:09:02.062 ************************************ 00:09:02.062 11:53:07 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:09:02.062 11:53:07 -- rpc/rpc.sh@40 -- # local info 00:09:02.062 11:53:07 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:02.062 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.062 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.062 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.062 11:53:07 -- rpc/rpc.sh@42 -- # info='{ 00:09:02.062 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65573", 00:09:02.062 "tpoint_group_mask": "0x8", 00:09:02.062 "iscsi_conn": { 00:09:02.062 "mask": "0x2", 00:09:02.062 "tpoint_mask": "0x0" 00:09:02.062 }, 00:09:02.062 "scsi": { 00:09:02.062 "mask": "0x4", 00:09:02.062 "tpoint_mask": "0x0" 00:09:02.062 }, 00:09:02.062 "bdev": { 00:09:02.062 "mask": "0x8", 00:09:02.062 "tpoint_mask": 
"0xffffffffffffffff" 00:09:02.062 }, 00:09:02.062 "nvmf_rdma": { 00:09:02.062 "mask": "0x10", 00:09:02.062 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "nvmf_tcp": { 00:09:02.063 "mask": "0x20", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "ftl": { 00:09:02.063 "mask": "0x40", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "blobfs": { 00:09:02.063 "mask": "0x80", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "dsa": { 00:09:02.063 "mask": "0x200", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "thread": { 00:09:02.063 "mask": "0x400", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "nvme_pcie": { 00:09:02.063 "mask": "0x800", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "iaa": { 00:09:02.063 "mask": "0x1000", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "nvme_tcp": { 00:09:02.063 "mask": "0x2000", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 }, 00:09:02.063 "bdev_nvme": { 00:09:02.063 "mask": "0x4000", 00:09:02.063 "tpoint_mask": "0x0" 00:09:02.063 } 00:09:02.063 }' 00:09:02.063 11:53:07 -- rpc/rpc.sh@43 -- # jq length 00:09:02.063 11:53:07 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:02.063 11:53:07 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:02.063 11:53:07 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:02.063 11:53:07 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:02.063 11:53:07 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:02.063 11:53:07 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:02.322 11:53:07 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:02.322 11:53:07 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:02.322 11:53:07 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:02.322 00:09:02.322 real 0m0.268s 00:09:02.322 user 0m0.234s 00:09:02.322 sys 0m0.028s 00:09:02.322 11:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.322 ************************************ 00:09:02.322 END TEST rpc_trace_cmd_test 00:09:02.322 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.322 ************************************ 00:09:02.322 11:53:07 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:02.322 11:53:07 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:02.322 11:53:07 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:02.322 11:53:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.322 11:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.322 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.322 ************************************ 00:09:02.322 START TEST rpc_daemon_integrity 00:09:02.322 ************************************ 00:09:02.322 11:53:07 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:09:02.322 11:53:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:02.322 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.322 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.322 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.322 11:53:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:02.322 11:53:07 -- rpc/rpc.sh@13 -- # jq length 00:09:02.322 11:53:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:02.322 11:53:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:02.322 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.322 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.322 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.322 11:53:07 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:02.322 11:53:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:02.322 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.322 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.322 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.322 11:53:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:02.322 { 00:09:02.322 "name": "Malloc2", 00:09:02.322 "aliases": [ 00:09:02.322 "ed58899f-cf68-4a66-b05b-b61256c0369e" 00:09:02.322 ], 00:09:02.322 "product_name": "Malloc disk", 00:09:02.322 "block_size": 512, 00:09:02.322 "num_blocks": 16384, 00:09:02.322 "uuid": "ed58899f-cf68-4a66-b05b-b61256c0369e", 00:09:02.322 "assigned_rate_limits": { 00:09:02.322 "rw_ios_per_sec": 0, 00:09:02.322 "rw_mbytes_per_sec": 0, 00:09:02.322 "r_mbytes_per_sec": 0, 00:09:02.322 "w_mbytes_per_sec": 0 00:09:02.322 }, 00:09:02.322 "claimed": false, 00:09:02.322 "zoned": false, 00:09:02.322 "supported_io_types": { 00:09:02.322 "read": true, 00:09:02.322 "write": true, 00:09:02.322 "unmap": true, 00:09:02.322 "write_zeroes": true, 00:09:02.322 "flush": true, 00:09:02.322 "reset": true, 00:09:02.322 "compare": false, 00:09:02.322 "compare_and_write": false, 00:09:02.322 "abort": true, 00:09:02.322 "nvme_admin": false, 00:09:02.322 "nvme_io": false 00:09:02.322 }, 00:09:02.322 "memory_domains": [ 00:09:02.322 { 00:09:02.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.322 "dma_device_type": 2 00:09:02.322 } 00:09:02.322 ], 00:09:02.322 "driver_specific": {} 00:09:02.322 } 00:09:02.322 ]' 00:09:02.322 11:53:07 -- rpc/rpc.sh@17 -- # jq length 00:09:02.581 11:53:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:02.581 11:53:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:02.581 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.581 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.581 [2024-11-29 11:53:07.844259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:02.581 [2024-11-29 11:53:07.844323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.581 [2024-11-29 11:53:07.844355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b56fe0 00:09:02.581 [2024-11-29 11:53:07.844363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.581 [2024-11-29 11:53:07.845737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.581 [2024-11-29 11:53:07.845785] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:02.581 Passthru0 00:09:02.581 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.581 11:53:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:02.581 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.581 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.581 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.581 11:53:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:02.581 { 00:09:02.581 "name": "Malloc2", 00:09:02.581 "aliases": [ 00:09:02.581 "ed58899f-cf68-4a66-b05b-b61256c0369e" 00:09:02.581 ], 00:09:02.581 "product_name": "Malloc disk", 00:09:02.581 "block_size": 512, 00:09:02.581 "num_blocks": 16384, 00:09:02.581 "uuid": "ed58899f-cf68-4a66-b05b-b61256c0369e", 00:09:02.581 "assigned_rate_limits": { 00:09:02.581 "rw_ios_per_sec": 0, 00:09:02.581 "rw_mbytes_per_sec": 0, 00:09:02.581 "r_mbytes_per_sec": 0, 00:09:02.581 
"w_mbytes_per_sec": 0 00:09:02.581 }, 00:09:02.581 "claimed": true, 00:09:02.581 "claim_type": "exclusive_write", 00:09:02.581 "zoned": false, 00:09:02.581 "supported_io_types": { 00:09:02.581 "read": true, 00:09:02.581 "write": true, 00:09:02.581 "unmap": true, 00:09:02.581 "write_zeroes": true, 00:09:02.581 "flush": true, 00:09:02.581 "reset": true, 00:09:02.581 "compare": false, 00:09:02.581 "compare_and_write": false, 00:09:02.581 "abort": true, 00:09:02.581 "nvme_admin": false, 00:09:02.581 "nvme_io": false 00:09:02.581 }, 00:09:02.581 "memory_domains": [ 00:09:02.581 { 00:09:02.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.581 "dma_device_type": 2 00:09:02.581 } 00:09:02.581 ], 00:09:02.581 "driver_specific": {} 00:09:02.581 }, 00:09:02.581 { 00:09:02.581 "name": "Passthru0", 00:09:02.582 "aliases": [ 00:09:02.582 "d96eaf22-575f-59b6-9e1c-4db453c08752" 00:09:02.582 ], 00:09:02.582 "product_name": "passthru", 00:09:02.582 "block_size": 512, 00:09:02.582 "num_blocks": 16384, 00:09:02.582 "uuid": "d96eaf22-575f-59b6-9e1c-4db453c08752", 00:09:02.582 "assigned_rate_limits": { 00:09:02.582 "rw_ios_per_sec": 0, 00:09:02.582 "rw_mbytes_per_sec": 0, 00:09:02.582 "r_mbytes_per_sec": 0, 00:09:02.582 "w_mbytes_per_sec": 0 00:09:02.582 }, 00:09:02.582 "claimed": false, 00:09:02.582 "zoned": false, 00:09:02.582 "supported_io_types": { 00:09:02.582 "read": true, 00:09:02.582 "write": true, 00:09:02.582 "unmap": true, 00:09:02.582 "write_zeroes": true, 00:09:02.582 "flush": true, 00:09:02.582 "reset": true, 00:09:02.582 "compare": false, 00:09:02.582 "compare_and_write": false, 00:09:02.582 "abort": true, 00:09:02.582 "nvme_admin": false, 00:09:02.582 "nvme_io": false 00:09:02.582 }, 00:09:02.582 "memory_domains": [ 00:09:02.582 { 00:09:02.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.582 "dma_device_type": 2 00:09:02.582 } 00:09:02.582 ], 00:09:02.582 "driver_specific": { 00:09:02.582 "passthru": { 00:09:02.582 "name": "Passthru0", 00:09:02.582 "base_bdev_name": "Malloc2" 00:09:02.582 } 00:09:02.582 } 00:09:02.582 } 00:09:02.582 ]' 00:09:02.582 11:53:07 -- rpc/rpc.sh@21 -- # jq length 00:09:02.582 11:53:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:02.582 11:53:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:02.582 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.582 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.582 11:53:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:02.582 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.582 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.582 11:53:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:02.582 11:53:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.582 11:53:07 -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 11:53:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.582 11:53:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:02.582 11:53:07 -- rpc/rpc.sh@26 -- # jq length 00:09:02.582 11:53:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:02.582 00:09:02.582 real 0m0.319s 00:09:02.582 user 0m0.211s 00:09:02.582 sys 0m0.045s 00:09:02.582 11:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.582 ************************************ 00:09:02.582 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 END TEST 
rpc_daemon_integrity 00:09:02.582 ************************************ 00:09:02.582 11:53:08 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:02.582 11:53:08 -- rpc/rpc.sh@84 -- # killprocess 65573 00:09:02.582 11:53:08 -- common/autotest_common.sh@936 -- # '[' -z 65573 ']' 00:09:02.582 11:53:08 -- common/autotest_common.sh@940 -- # kill -0 65573 00:09:02.582 11:53:08 -- common/autotest_common.sh@941 -- # uname 00:09:02.582 11:53:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.582 11:53:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65573 00:09:02.841 11:53:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:02.841 11:53:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:02.841 killing process with pid 65573 00:09:02.841 11:53:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65573' 00:09:02.841 11:53:08 -- common/autotest_common.sh@955 -- # kill 65573 00:09:02.841 11:53:08 -- common/autotest_common.sh@960 -- # wait 65573 00:09:03.410 00:09:03.410 real 0m3.143s 00:09:03.410 user 0m3.916s 00:09:03.410 sys 0m0.815s 00:09:03.410 11:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.410 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.410 ************************************ 00:09:03.410 END TEST rpc 00:09:03.410 ************************************ 00:09:03.410 11:53:08 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:03.410 11:53:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.410 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.410 ************************************ 00:09:03.410 START TEST rpc_client 00:09:03.410 ************************************ 00:09:03.410 11:53:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:03.410 * Looking for test storage... 00:09:03.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:03.410 11:53:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:03.410 11:53:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:03.410 11:53:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.410 11:53:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.410 11:53:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.410 11:53:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.410 11:53:08 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.410 11:53:08 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.410 11:53:08 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.410 11:53:08 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.410 11:53:08 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.410 11:53:08 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.410 11:53:08 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.410 11:53:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.410 11:53:08 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.410 11:53:08 -- scripts/common.sh@344 -- # : 1 00:09:03.410 11:53:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.410 11:53:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.410 11:53:08 -- scripts/common.sh@364 -- # decimal 1 00:09:03.410 11:53:08 -- scripts/common.sh@352 -- # local d=1 00:09:03.410 11:53:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.410 11:53:08 -- scripts/common.sh@354 -- # echo 1 00:09:03.410 11:53:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.410 11:53:08 -- scripts/common.sh@365 -- # decimal 2 00:09:03.410 11:53:08 -- scripts/common.sh@352 -- # local d=2 00:09:03.410 11:53:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.410 11:53:08 -- scripts/common.sh@354 -- # echo 2 00:09:03.410 11:53:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.410 11:53:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.410 11:53:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.410 11:53:08 -- scripts/common.sh@367 -- # return 0 00:09:03.410 11:53:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.410 --rc genhtml_branch_coverage=1 00:09:03.410 --rc genhtml_function_coverage=1 00:09:03.410 --rc genhtml_legend=1 00:09:03.410 --rc geninfo_all_blocks=1 00:09:03.410 --rc geninfo_unexecuted_blocks=1 00:09:03.410 00:09:03.410 ' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.410 --rc genhtml_branch_coverage=1 00:09:03.410 --rc genhtml_function_coverage=1 00:09:03.410 --rc genhtml_legend=1 00:09:03.410 --rc geninfo_all_blocks=1 00:09:03.410 --rc geninfo_unexecuted_blocks=1 00:09:03.410 00:09:03.410 ' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.410 --rc genhtml_branch_coverage=1 00:09:03.410 --rc genhtml_function_coverage=1 00:09:03.410 --rc genhtml_legend=1 00:09:03.410 --rc geninfo_all_blocks=1 00:09:03.410 --rc geninfo_unexecuted_blocks=1 00:09:03.410 00:09:03.410 ' 00:09:03.410 11:53:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.410 --rc genhtml_branch_coverage=1 00:09:03.410 --rc genhtml_function_coverage=1 00:09:03.410 --rc genhtml_legend=1 00:09:03.410 --rc geninfo_all_blocks=1 00:09:03.410 --rc geninfo_unexecuted_blocks=1 00:09:03.410 00:09:03.410 ' 00:09:03.410 11:53:08 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:03.410 OK 00:09:03.410 11:53:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:03.410 00:09:03.410 real 0m0.215s 00:09:03.410 user 0m0.125s 00:09:03.410 sys 0m0.100s 00:09:03.410 11:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.410 ************************************ 00:09:03.410 END TEST rpc_client 00:09:03.410 ************************************ 00:09:03.410 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.670 11:53:08 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:03.670 11:53:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.670 11:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.670 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.670 ************************************ 00:09:03.670 START TEST 
json_config 00:09:03.670 ************************************ 00:09:03.670 11:53:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:03.670 11:53:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:03.670 11:53:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:03.670 11:53:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.670 11:53:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.670 11:53:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.670 11:53:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.670 11:53:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.670 11:53:09 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.670 11:53:09 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.670 11:53:09 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.670 11:53:09 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.670 11:53:09 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.670 11:53:09 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.670 11:53:09 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.670 11:53:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.670 11:53:09 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.670 11:53:09 -- scripts/common.sh@344 -- # : 1 00:09:03.670 11:53:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.670 11:53:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.670 11:53:09 -- scripts/common.sh@364 -- # decimal 1 00:09:03.670 11:53:09 -- scripts/common.sh@352 -- # local d=1 00:09:03.670 11:53:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.670 11:53:09 -- scripts/common.sh@354 -- # echo 1 00:09:03.670 11:53:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.670 11:53:09 -- scripts/common.sh@365 -- # decimal 2 00:09:03.670 11:53:09 -- scripts/common.sh@352 -- # local d=2 00:09:03.670 11:53:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.670 11:53:09 -- scripts/common.sh@354 -- # echo 2 00:09:03.670 11:53:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.670 11:53:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.670 11:53:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.670 11:53:09 -- scripts/common.sh@367 -- # return 0 00:09:03.670 11:53:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.670 11:53:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.670 --rc genhtml_branch_coverage=1 00:09:03.670 --rc genhtml_function_coverage=1 00:09:03.670 --rc genhtml_legend=1 00:09:03.670 --rc geninfo_all_blocks=1 00:09:03.670 --rc geninfo_unexecuted_blocks=1 00:09:03.670 00:09:03.670 ' 00:09:03.670 11:53:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.670 --rc genhtml_branch_coverage=1 00:09:03.670 --rc genhtml_function_coverage=1 00:09:03.670 --rc genhtml_legend=1 00:09:03.670 --rc geninfo_all_blocks=1 00:09:03.670 --rc geninfo_unexecuted_blocks=1 00:09:03.670 00:09:03.670 ' 00:09:03.670 11:53:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:03.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.671 --rc genhtml_branch_coverage=1 00:09:03.671 --rc genhtml_function_coverage=1 00:09:03.671 --rc genhtml_legend=1 00:09:03.671 --rc 
geninfo_all_blocks=1 00:09:03.671 --rc geninfo_unexecuted_blocks=1 00:09:03.671 00:09:03.671 ' 00:09:03.671 11:53:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.671 --rc genhtml_branch_coverage=1 00:09:03.671 --rc genhtml_function_coverage=1 00:09:03.671 --rc genhtml_legend=1 00:09:03.671 --rc geninfo_all_blocks=1 00:09:03.671 --rc geninfo_unexecuted_blocks=1 00:09:03.671 00:09:03.671 ' 00:09:03.671 11:53:09 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.671 11:53:09 -- nvmf/common.sh@7 -- # uname -s 00:09:03.671 11:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.671 11:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.671 11:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.671 11:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.671 11:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.671 11:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.671 11:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.671 11:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.671 11:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.671 11:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.671 11:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:09:03.671 11:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:09:03.671 11:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.671 11:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.671 11:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:03.671 11:53:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.671 11:53:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.671 11:53:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.671 11:53:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.671 11:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.671 11:53:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.671 11:53:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.671 
11:53:09 -- paths/export.sh@5 -- # export PATH 00:09:03.671 11:53:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.671 11:53:09 -- nvmf/common.sh@46 -- # : 0 00:09:03.671 11:53:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:03.671 11:53:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:03.671 11:53:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:03.671 11:53:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.671 11:53:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.671 11:53:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:03.671 11:53:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:03.671 11:53:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:03.671 11:53:09 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:03.671 11:53:09 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:09:03.671 11:53:09 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:03.671 11:53:09 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:03.671 11:53:09 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:03.671 11:53:09 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:03.671 11:53:09 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:03.671 11:53:09 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:03.671 11:53:09 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:03.671 11:53:09 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:03.671 11:53:09 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:03.671 INFO: JSON configuration test init 00:09:03.671 11:53:09 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:09:03.671 11:53:09 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:03.671 11:53:09 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:03.671 11:53:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.671 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:09:03.671 11:53:09 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:03.671 11:53:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.671 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:09:03.671 11:53:09 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:03.671 11:53:09 -- json_config/json_config.sh@98 -- # local app=target 00:09:03.671 
11:53:09 -- json_config/json_config.sh@99 -- # shift 00:09:03.671 11:53:09 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:03.671 11:53:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:03.671 11:53:09 -- json_config/json_config.sh@111 -- # app_pid[$app]=65831 00:09:03.671 Waiting for target to run... 00:09:03.671 11:53:09 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:03.671 11:53:09 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:03.671 11:53:09 -- json_config/json_config.sh@114 -- # waitforlisten 65831 /var/tmp/spdk_tgt.sock 00:09:03.671 11:53:09 -- common/autotest_common.sh@829 -- # '[' -z 65831 ']' 00:09:03.671 11:53:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:03.671 11:53:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:03.671 11:53:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:03.671 11:53:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.671 11:53:09 -- common/autotest_common.sh@10 -- # set +x 00:09:03.940 [2024-11-29 11:53:09.192006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:03.940 [2024-11-29 11:53:09.192147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65831 ] 00:09:04.515 [2024-11-29 11:53:09.732234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.515 [2024-11-29 11:53:09.824197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.515 [2024-11-29 11:53:09.824402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.773 11:53:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.773 00:09:04.773 11:53:10 -- common/autotest_common.sh@862 -- # return 0 00:09:04.773 11:53:10 -- json_config/json_config.sh@115 -- # echo '' 00:09:04.773 11:53:10 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:04.773 11:53:10 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:04.773 11:53:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.773 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.773 11:53:10 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:04.773 11:53:10 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:04.773 11:53:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.773 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:04.773 11:53:10 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:04.773 11:53:10 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:04.773 11:53:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:09:05.342 11:53:10 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:05.342 11:53:10 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:05.342 11:53:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.342 11:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:05.342 11:53:10 -- json_config/json_config.sh@48 -- # local ret=0 00:09:05.342 11:53:10 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:05.342 11:53:10 -- json_config/json_config.sh@49 -- # local enabled_types 00:09:05.342 11:53:10 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:05.342 11:53:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:05.342 11:53:10 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:05.601 11:53:11 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:05.601 11:53:11 -- json_config/json_config.sh@51 -- # local get_types 00:09:05.601 11:53:11 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:05.601 11:53:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.601 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.601 11:53:11 -- json_config/json_config.sh@58 -- # return 0 00:09:05.601 11:53:11 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:09:05.601 11:53:11 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:09:05.601 11:53:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.601 11:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.601 11:53:11 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:05.601 11:53:11 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:09:05.601 11:53:11 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:05.601 11:53:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:05.859 MallocForNvmf0 00:09:06.119 11:53:11 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:06.119 11:53:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:06.119 MallocForNvmf1 00:09:06.119 11:53:11 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:06.119 11:53:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:06.377 [2024-11-29 11:53:11.859143] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.377 11:53:11 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.377 11:53:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.636 11:53:12 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:06.636 11:53:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:07.204 11:53:12 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:07.204 11:53:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:07.463 11:53:12 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:07.463 11:53:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:07.722 [2024-11-29 11:53:12.975946] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:07.722 11:53:12 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:09:07.722 11:53:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.722 11:53:12 -- common/autotest_common.sh@10 -- # set +x 00:09:07.722 11:53:13 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:07.722 11:53:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.722 11:53:13 -- common/autotest_common.sh@10 -- # set +x 00:09:07.722 11:53:13 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:07.722 11:53:13 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:07.722 11:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:07.981 MallocBdevForConfigChangeCheck 00:09:07.981 11:53:13 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:07.981 11:53:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.981 11:53:13 -- common/autotest_common.sh@10 -- # set +x 00:09:07.981 11:53:13 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:07.981 11:53:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:08.549 INFO: shutting down applications... 00:09:08.549 11:53:13 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:09:08.549 11:53:13 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:08.549 11:53:13 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:08.549 11:53:13 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:08.549 11:53:13 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:08.808 Calling clear_iscsi_subsystem 00:09:08.808 Calling clear_nvmf_subsystem 00:09:08.808 Calling clear_nbd_subsystem 00:09:08.808 Calling clear_ublk_subsystem 00:09:08.808 Calling clear_vhost_blk_subsystem 00:09:08.808 Calling clear_vhost_scsi_subsystem 00:09:08.808 Calling clear_scheduler_subsystem 00:09:08.808 Calling clear_bdev_subsystem 00:09:08.808 Calling clear_accel_subsystem 00:09:08.808 Calling clear_vmd_subsystem 00:09:08.808 Calling clear_sock_subsystem 00:09:08.808 Calling clear_iobuf_subsystem 00:09:08.808 11:53:14 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:08.808 11:53:14 -- json_config/json_config.sh@396 -- # count=100 00:09:08.808 11:53:14 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:08.808 11:53:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:08.808 11:53:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:08.808 11:53:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:09.066 11:53:14 -- json_config/json_config.sh@398 -- # break 00:09:09.066 11:53:14 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:09.066 11:53:14 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:09.066 11:53:14 -- json_config/json_config.sh@120 -- # local app=target 00:09:09.066 11:53:14 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:09.066 11:53:14 -- json_config/json_config.sh@124 -- # [[ -n 65831 ]] 00:09:09.066 11:53:14 -- json_config/json_config.sh@127 -- # kill -SIGINT 65831 00:09:09.066 11:53:14 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:09.066 11:53:14 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:09.066 11:53:14 -- json_config/json_config.sh@130 -- # kill -0 65831 00:09:09.066 11:53:14 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:09.635 11:53:15 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:09.635 11:53:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:09.635 11:53:15 -- json_config/json_config.sh@130 -- # kill -0 65831 00:09:09.635 11:53:15 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:09.635 11:53:15 -- json_config/json_config.sh@132 -- # break 00:09:09.635 11:53:15 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:09.635 SPDK target shutdown done 00:09:09.635 11:53:15 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:09.635 INFO: relaunching applications... 00:09:09.635 11:53:15 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:09:09.635 11:53:15 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:09.635 11:53:15 -- json_config/json_config.sh@98 -- # local app=target 00:09:09.635 11:53:15 -- json_config/json_config.sh@99 -- # shift 00:09:09.635 11:53:15 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:09.635 11:53:15 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:09.635 11:53:15 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:09.635 11:53:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:09.635 11:53:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:09.635 11:53:15 -- json_config/json_config.sh@111 -- # app_pid[$app]=66022 00:09:09.635 Waiting for target to run... 00:09:09.635 11:53:15 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:09.635 11:53:15 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:09.635 11:53:15 -- json_config/json_config.sh@114 -- # waitforlisten 66022 /var/tmp/spdk_tgt.sock 00:09:09.635 11:53:15 -- common/autotest_common.sh@829 -- # '[' -z 66022 ']' 00:09:09.635 11:53:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:09.635 11:53:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:09.635 11:53:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:09.635 11:53:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.635 11:53:15 -- common/autotest_common.sh@10 -- # set +x 00:09:09.635 [2024-11-29 11:53:15.092877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:09.635 [2024-11-29 11:53:15.093042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66022 ] 00:09:10.200 [2024-11-29 11:53:15.616105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.200 [2024-11-29 11:53:15.704275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.200 [2024-11-29 11:53:15.704477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.768 [2024-11-29 11:53:16.021661] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.768 [2024-11-29 11:53:16.053750] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:11.336 11:53:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.336 11:53:16 -- common/autotest_common.sh@862 -- # return 0 00:09:11.336 00:09:11.336 11:53:16 -- json_config/json_config.sh@115 -- # echo '' 00:09:11.336 11:53:16 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:11.336 INFO: Checking if target configuration is the same... 00:09:11.336 11:53:16 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:09:11.336 11:53:16 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:11.336 11:53:16 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:11.336 11:53:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:11.336 + '[' 2 -ne 2 ']' 00:09:11.336 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:11.336 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:11.336 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:11.336 +++ basename /dev/fd/62 00:09:11.336 ++ mktemp /tmp/62.XXX 00:09:11.336 + tmp_file_1=/tmp/62.13W 00:09:11.336 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:11.336 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:11.336 + tmp_file_2=/tmp/spdk_tgt_config.json.tSM 00:09:11.336 + ret=0 00:09:11.336 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:11.905 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:11.905 + diff -u /tmp/62.13W /tmp/spdk_tgt_config.json.tSM 00:09:11.905 INFO: JSON config files are the same 00:09:11.905 + echo 'INFO: JSON config files are the same' 00:09:11.905 + rm /tmp/62.13W /tmp/spdk_tgt_config.json.tSM 00:09:11.905 + exit 0 00:09:11.905 11:53:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:11.905 INFO: changing configuration and checking if this can be detected... 00:09:11.905 11:53:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:11.905 11:53:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:11.905 11:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:12.164 11:53:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:12.165 11:53:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:12.165 11:53:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:12.165 + '[' 2 -ne 2 ']' 00:09:12.165 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:12.165 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:12.165 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:12.165 +++ basename /dev/fd/62 00:09:12.165 ++ mktemp /tmp/62.XXX 00:09:12.165 + tmp_file_1=/tmp/62.lK5 00:09:12.165 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:12.165 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:12.165 + tmp_file_2=/tmp/spdk_tgt_config.json.fSF 00:09:12.165 + ret=0 00:09:12.165 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:12.423 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:12.682 + diff -u /tmp/62.lK5 /tmp/spdk_tgt_config.json.fSF 00:09:12.682 + ret=1 00:09:12.682 + echo '=== Start of file: /tmp/62.lK5 ===' 00:09:12.682 + cat /tmp/62.lK5 00:09:12.682 + echo '=== End of file: /tmp/62.lK5 ===' 00:09:12.682 + echo '' 00:09:12.682 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fSF ===' 00:09:12.682 + cat /tmp/spdk_tgt_config.json.fSF 00:09:12.682 + echo '=== End of file: /tmp/spdk_tgt_config.json.fSF ===' 00:09:12.682 + echo '' 00:09:12.682 + rm /tmp/62.lK5 /tmp/spdk_tgt_config.json.fSF 00:09:12.682 + exit 1 00:09:12.682 INFO: configuration change detected. 00:09:12.682 11:53:17 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:09:12.682 11:53:17 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:12.682 11:53:17 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:12.682 11:53:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.682 11:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 11:53:17 -- json_config/json_config.sh@360 -- # local ret=0 00:09:12.682 11:53:17 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:12.682 11:53:17 -- json_config/json_config.sh@370 -- # [[ -n 66022 ]] 00:09:12.682 11:53:17 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:12.682 11:53:17 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:12.682 11:53:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.682 11:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 11:53:17 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:09:12.682 11:53:17 -- json_config/json_config.sh@246 -- # uname -s 00:09:12.682 11:53:17 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:12.682 11:53:17 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:12.682 11:53:18 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:12.682 11:53:18 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:12.682 11:53:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.682 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 11:53:18 -- json_config/json_config.sh@376 -- # killprocess 66022 00:09:12.682 11:53:18 -- common/autotest_common.sh@936 -- # '[' -z 66022 ']' 00:09:12.682 11:53:18 -- common/autotest_common.sh@940 -- # kill -0 66022 00:09:12.682 11:53:18 -- common/autotest_common.sh@941 -- # uname 00:09:12.682 11:53:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:12.682 11:53:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66022 00:09:12.682 11:53:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:12.682 11:53:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:12.682 11:53:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66022' 00:09:12.682 killing process with pid 66022 00:09:12.682 
11:53:18 -- common/autotest_common.sh@955 -- # kill 66022 00:09:12.682 11:53:18 -- common/autotest_common.sh@960 -- # wait 66022 00:09:12.941 11:53:18 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:12.941 11:53:18 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:12.941 11:53:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.941 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:12.941 11:53:18 -- json_config/json_config.sh@381 -- # return 0 00:09:12.941 11:53:18 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:12.941 INFO: Success 00:09:12.941 00:09:12.941 real 0m9.493s 00:09:12.941 user 0m13.317s 00:09:12.941 sys 0m2.124s 00:09:12.941 11:53:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.941 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:12.941 ************************************ 00:09:12.941 END TEST json_config 00:09:12.941 ************************************ 00:09:13.201 11:53:18 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:13.201 11:53:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.201 11:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.201 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 ************************************ 00:09:13.201 START TEST json_config_extra_key 00:09:13.201 ************************************ 00:09:13.201 11:53:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:13.201 11:53:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:13.201 11:53:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:13.201 11:53:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:13.202 11:53:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:13.202 11:53:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:13.202 11:53:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:13.202 11:53:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:13.202 11:53:18 -- scripts/common.sh@335 -- # IFS=.-: 00:09:13.202 11:53:18 -- scripts/common.sh@335 -- # read -ra ver1 00:09:13.202 11:53:18 -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.202 11:53:18 -- scripts/common.sh@336 -- # read -ra ver2 00:09:13.202 11:53:18 -- scripts/common.sh@337 -- # local 'op=<' 00:09:13.202 11:53:18 -- scripts/common.sh@339 -- # ver1_l=2 00:09:13.202 11:53:18 -- scripts/common.sh@340 -- # ver2_l=1 00:09:13.202 11:53:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:13.202 11:53:18 -- scripts/common.sh@343 -- # case "$op" in 00:09:13.202 11:53:18 -- scripts/common.sh@344 -- # : 1 00:09:13.202 11:53:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:13.202 11:53:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.202 11:53:18 -- scripts/common.sh@364 -- # decimal 1 00:09:13.202 11:53:18 -- scripts/common.sh@352 -- # local d=1 00:09:13.202 11:53:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.202 11:53:18 -- scripts/common.sh@354 -- # echo 1 00:09:13.202 11:53:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:13.202 11:53:18 -- scripts/common.sh@365 -- # decimal 2 00:09:13.202 11:53:18 -- scripts/common.sh@352 -- # local d=2 00:09:13.202 11:53:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.202 11:53:18 -- scripts/common.sh@354 -- # echo 2 00:09:13.202 11:53:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:13.202 11:53:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:13.202 11:53:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:13.202 11:53:18 -- scripts/common.sh@367 -- # return 0 00:09:13.202 11:53:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.202 11:53:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.202 --rc genhtml_branch_coverage=1 00:09:13.202 --rc genhtml_function_coverage=1 00:09:13.202 --rc genhtml_legend=1 00:09:13.202 --rc geninfo_all_blocks=1 00:09:13.202 --rc geninfo_unexecuted_blocks=1 00:09:13.202 00:09:13.202 ' 00:09:13.202 11:53:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.202 --rc genhtml_branch_coverage=1 00:09:13.202 --rc genhtml_function_coverage=1 00:09:13.202 --rc genhtml_legend=1 00:09:13.202 --rc geninfo_all_blocks=1 00:09:13.202 --rc geninfo_unexecuted_blocks=1 00:09:13.202 00:09:13.202 ' 00:09:13.202 11:53:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.202 --rc genhtml_branch_coverage=1 00:09:13.202 --rc genhtml_function_coverage=1 00:09:13.202 --rc genhtml_legend=1 00:09:13.202 --rc geninfo_all_blocks=1 00:09:13.202 --rc geninfo_unexecuted_blocks=1 00:09:13.202 00:09:13.202 ' 00:09:13.202 11:53:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.202 --rc genhtml_branch_coverage=1 00:09:13.202 --rc genhtml_function_coverage=1 00:09:13.202 --rc genhtml_legend=1 00:09:13.202 --rc geninfo_all_blocks=1 00:09:13.202 --rc geninfo_unexecuted_blocks=1 00:09:13.202 00:09:13.202 ' 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.202 11:53:18 -- nvmf/common.sh@7 -- # uname -s 00:09:13.202 11:53:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.202 11:53:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.202 11:53:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.202 11:53:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.202 11:53:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.202 11:53:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.202 11:53:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.202 11:53:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.202 11:53:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.202 11:53:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.202 11:53:18 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:09:13.202 11:53:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:09:13.202 11:53:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.202 11:53:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.202 11:53:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:13.202 11:53:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.202 11:53:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.202 11:53:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.202 11:53:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.202 11:53:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.202 11:53:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.202 11:53:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.202 11:53:18 -- paths/export.sh@5 -- # export PATH 00:09:13.202 11:53:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.202 11:53:18 -- nvmf/common.sh@46 -- # : 0 00:09:13.202 11:53:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:13.202 11:53:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:13.202 11:53:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:13.202 11:53:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.202 11:53:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.202 11:53:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:13.202 11:53:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:13.202 11:53:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:13.202 INFO: launching applications... 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:13.202 11:53:18 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66188 00:09:13.203 Waiting for target to run... 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:13.203 11:53:18 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66188 /var/tmp/spdk_tgt.sock 00:09:13.203 11:53:18 -- common/autotest_common.sh@829 -- # '[' -z 66188 ']' 00:09:13.203 11:53:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:13.203 11:53:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:13.203 11:53:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:13.203 11:53:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.203 11:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:13.462 [2024-11-29 11:53:18.733941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
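A hedged sketch of the launch-and-wait pattern json_config_extra_key.sh@30-34 uses above: start spdk_tgt on its own RPC socket with the extra JSON config, then poll that socket until it answers. The 30 x 0.5 s retry budget and the rpc_get_methods probe are assumptions for a by-hand run, not the exact waitforlisten logic.

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json $SPDK/test/json_config/extra_key.json &
pid=$!
# wait until the target's RPC socket accepts requests
for _ in $(seq 1 30); do
    if $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods > /dev/null 2>&1; then
        echo 'Waiting for target to run... done'
        break
    fi
    sleep 0.5
done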
00:09:13.462 [2024-11-29 11:53:18.734081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66188 ] 00:09:14.030 [2024-11-29 11:53:19.264313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.030 [2024-11-29 11:53:19.349965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.030 [2024-11-29 11:53:19.350152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.289 11:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.289 11:53:19 -- common/autotest_common.sh@862 -- # return 0 00:09:14.289 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:14.289 INFO: shutting down applications... 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66188 ]] 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66188 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66188 00:09:14.289 11:53:19 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:14.856 11:53:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:14.857 11:53:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:14.857 11:53:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66188 00:09:14.857 11:53:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66188 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:15.425 SPDK target shutdown done 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:15.425 Success 00:09:15.425 11:53:20 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:15.425 00:09:15.425 real 0m2.314s 00:09:15.425 user 0m1.842s 00:09:15.425 sys 0m0.560s 00:09:15.425 11:53:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.425 11:53:20 -- common/autotest_common.sh@10 -- # set +x 00:09:15.425 ************************************ 00:09:15.425 END TEST json_config_extra_key 00:09:15.425 ************************************ 00:09:15.425 11:53:20 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:15.425 11:53:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.425 11:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 
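The shutdown just logged (json_config_extra_key.sh@47-54) sends SIGINT and then polls the target until it exits; a compact sketch of that loop, assuming $pid holds the spdk_tgt PID captured at launch:

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done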
00:09:15.425 11:53:20 -- common/autotest_common.sh@10 -- # set +x 00:09:15.425 ************************************ 00:09:15.425 START TEST alias_rpc 00:09:15.425 ************************************ 00:09:15.425 11:53:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:15.425 * Looking for test storage... 00:09:15.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:15.685 11:53:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:15.685 11:53:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:15.685 11:53:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:15.685 11:53:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:15.685 11:53:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:15.685 11:53:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:15.685 11:53:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:15.685 11:53:21 -- scripts/common.sh@335 -- # IFS=.-: 00:09:15.685 11:53:21 -- scripts/common.sh@335 -- # read -ra ver1 00:09:15.685 11:53:21 -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.685 11:53:21 -- scripts/common.sh@336 -- # read -ra ver2 00:09:15.685 11:53:21 -- scripts/common.sh@337 -- # local 'op=<' 00:09:15.685 11:53:21 -- scripts/common.sh@339 -- # ver1_l=2 00:09:15.685 11:53:21 -- scripts/common.sh@340 -- # ver2_l=1 00:09:15.685 11:53:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:15.685 11:53:21 -- scripts/common.sh@343 -- # case "$op" in 00:09:15.685 11:53:21 -- scripts/common.sh@344 -- # : 1 00:09:15.685 11:53:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:15.685 11:53:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.685 11:53:21 -- scripts/common.sh@364 -- # decimal 1 00:09:15.685 11:53:21 -- scripts/common.sh@352 -- # local d=1 00:09:15.685 11:53:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.685 11:53:21 -- scripts/common.sh@354 -- # echo 1 00:09:15.685 11:53:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:15.685 11:53:21 -- scripts/common.sh@365 -- # decimal 2 00:09:15.685 11:53:21 -- scripts/common.sh@352 -- # local d=2 00:09:15.685 11:53:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.685 11:53:21 -- scripts/common.sh@354 -- # echo 2 00:09:15.685 11:53:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:15.685 11:53:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:15.685 11:53:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:15.685 11:53:21 -- scripts/common.sh@367 -- # return 0 00:09:15.685 11:53:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.685 11:53:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.685 --rc genhtml_branch_coverage=1 00:09:15.685 --rc genhtml_function_coverage=1 00:09:15.685 --rc genhtml_legend=1 00:09:15.685 --rc geninfo_all_blocks=1 00:09:15.685 --rc geninfo_unexecuted_blocks=1 00:09:15.685 00:09:15.685 ' 00:09:15.685 11:53:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.685 --rc genhtml_branch_coverage=1 00:09:15.685 --rc genhtml_function_coverage=1 00:09:15.685 --rc genhtml_legend=1 00:09:15.685 --rc geninfo_all_blocks=1 00:09:15.685 --rc geninfo_unexecuted_blocks=1 00:09:15.685 00:09:15.685 ' 00:09:15.685 11:53:21 
-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.685 --rc genhtml_branch_coverage=1 00:09:15.685 --rc genhtml_function_coverage=1 00:09:15.685 --rc genhtml_legend=1 00:09:15.685 --rc geninfo_all_blocks=1 00:09:15.685 --rc geninfo_unexecuted_blocks=1 00:09:15.685 00:09:15.685 ' 00:09:15.685 11:53:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:15.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.685 --rc genhtml_branch_coverage=1 00:09:15.685 --rc genhtml_function_coverage=1 00:09:15.685 --rc genhtml_legend=1 00:09:15.685 --rc geninfo_all_blocks=1 00:09:15.685 --rc geninfo_unexecuted_blocks=1 00:09:15.685 00:09:15.685 ' 00:09:15.685 11:53:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:15.685 11:53:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66266 00:09:15.685 11:53:21 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:15.685 11:53:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66266 00:09:15.685 11:53:21 -- common/autotest_common.sh@829 -- # '[' -z 66266 ']' 00:09:15.685 11:53:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.685 11:53:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.685 11:53:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.685 11:53:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.685 11:53:21 -- common/autotest_common.sh@10 -- # set +x 00:09:15.685 [2024-11-29 11:53:21.108595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:15.685 [2024-11-29 11:53:21.108724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66266 ] 00:09:15.968 [2024-11-29 11:53:21.246830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.968 [2024-11-29 11:53:21.342241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.968 [2024-11-29 11:53:21.342417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.917 11:53:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.917 11:53:22 -- common/autotest_common.sh@862 -- # return 0 00:09:16.917 11:53:22 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:16.917 11:53:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66266 00:09:16.917 11:53:22 -- common/autotest_common.sh@936 -- # '[' -z 66266 ']' 00:09:16.917 11:53:22 -- common/autotest_common.sh@940 -- # kill -0 66266 00:09:16.917 11:53:22 -- common/autotest_common.sh@941 -- # uname 00:09:16.917 11:53:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.917 11:53:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66266 00:09:17.176 11:53:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:17.176 11:53:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:17.176 killing process with pid 66266 00:09:17.176 11:53:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66266' 00:09:17.176 11:53:22 -- common/autotest_common.sh@955 -- # kill 66266 00:09:17.176 11:53:22 -- common/autotest_common.sh@960 -- # wait 66266 00:09:17.746 00:09:17.746 real 0m2.151s 00:09:17.746 user 0m2.343s 00:09:17.746 sys 0m0.554s 00:09:17.746 11:53:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.746 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:09:17.746 ************************************ 00:09:17.746 END TEST alias_rpc 00:09:17.746 ************************************ 00:09:17.746 11:53:23 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:09:17.746 11:53:23 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:17.746 11:53:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.746 11:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.746 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:09:17.746 ************************************ 00:09:17.746 START TEST spdkcli_tcp 00:09:17.746 ************************************ 00:09:17.746 11:53:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:17.746 * Looking for test storage... 
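alias_rpc above drives rpc.py's load_config entry point (alias_rpc.sh@17, with the -i flag shown in the log). A rough sketch of invoking it by hand, feeding a configuration document on stdin; the bdev_malloc_create payload is purely illustrative and is not taken from the test.

cat << 'EOF' | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF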
00:09:17.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:17.746 11:53:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:17.746 11:53:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:17.746 11:53:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:17.746 11:53:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:17.746 11:53:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:17.746 11:53:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:17.746 11:53:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:17.746 11:53:23 -- scripts/common.sh@335 -- # IFS=.-: 00:09:17.746 11:53:23 -- scripts/common.sh@335 -- # read -ra ver1 00:09:17.746 11:53:23 -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.746 11:53:23 -- scripts/common.sh@336 -- # read -ra ver2 00:09:17.746 11:53:23 -- scripts/common.sh@337 -- # local 'op=<' 00:09:17.746 11:53:23 -- scripts/common.sh@339 -- # ver1_l=2 00:09:17.746 11:53:23 -- scripts/common.sh@340 -- # ver2_l=1 00:09:17.746 11:53:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:17.746 11:53:23 -- scripts/common.sh@343 -- # case "$op" in 00:09:17.746 11:53:23 -- scripts/common.sh@344 -- # : 1 00:09:17.746 11:53:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:17.746 11:53:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.746 11:53:23 -- scripts/common.sh@364 -- # decimal 1 00:09:17.746 11:53:23 -- scripts/common.sh@352 -- # local d=1 00:09:17.746 11:53:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.746 11:53:23 -- scripts/common.sh@354 -- # echo 1 00:09:17.746 11:53:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:17.746 11:53:23 -- scripts/common.sh@365 -- # decimal 2 00:09:17.746 11:53:23 -- scripts/common.sh@352 -- # local d=2 00:09:17.746 11:53:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.746 11:53:23 -- scripts/common.sh@354 -- # echo 2 00:09:17.746 11:53:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:17.746 11:53:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:17.746 11:53:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:17.746 11:53:23 -- scripts/common.sh@367 -- # return 0 00:09:17.746 11:53:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.746 11:53:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:17.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.746 --rc genhtml_branch_coverage=1 00:09:17.746 --rc genhtml_function_coverage=1 00:09:17.746 --rc genhtml_legend=1 00:09:17.746 --rc geninfo_all_blocks=1 00:09:17.746 --rc geninfo_unexecuted_blocks=1 00:09:17.746 00:09:17.746 ' 00:09:17.746 11:53:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:17.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.746 --rc genhtml_branch_coverage=1 00:09:17.746 --rc genhtml_function_coverage=1 00:09:17.747 --rc genhtml_legend=1 00:09:17.747 --rc geninfo_all_blocks=1 00:09:17.747 --rc geninfo_unexecuted_blocks=1 00:09:17.747 00:09:17.747 ' 00:09:17.747 11:53:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.747 --rc genhtml_branch_coverage=1 00:09:17.747 --rc genhtml_function_coverage=1 00:09:17.747 --rc genhtml_legend=1 00:09:17.747 --rc geninfo_all_blocks=1 00:09:17.747 --rc geninfo_unexecuted_blocks=1 00:09:17.747 00:09:17.747 ' 00:09:17.747 11:53:23 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.747 --rc genhtml_branch_coverage=1 00:09:17.747 --rc genhtml_function_coverage=1 00:09:17.747 --rc genhtml_legend=1 00:09:17.747 --rc geninfo_all_blocks=1 00:09:17.747 --rc geninfo_unexecuted_blocks=1 00:09:17.747 00:09:17.747 ' 00:09:17.747 11:53:23 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:17.747 11:53:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:17.747 11:53:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:17.747 11:53:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:17.747 11:53:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:17.747 11:53:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:17.747 11:53:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:17.747 11:53:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.004 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:09:18.004 11:53:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66349 00:09:18.004 11:53:23 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:18.004 11:53:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 66349 00:09:18.004 11:53:23 -- common/autotest_common.sh@829 -- # '[' -z 66349 ']' 00:09:18.004 11:53:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.004 11:53:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.004 11:53:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.004 11:53:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.004 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:09:18.004 [2024-11-29 11:53:23.323673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
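tcp.sh@18-19 above points the CLI at 127.0.0.1:9998; the step that follows bridges that TCP port to the target's UNIX RPC socket with socat and then issues RPCs over TCP. A sketch of the same bridge run by hand; the reuseaddr,fork options and the settle sleep are assumptions, while the rpc.py flags match the log.

socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
sleep 1   # give socat a moment to start listening
# -r connection retries, -t timeout; -s/-p select the TCP endpoint of the bridge
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"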
00:09:18.004 [2024-11-29 11:53:23.323800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66349 ] 00:09:18.004 [2024-11-29 11:53:23.463252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.263 [2024-11-29 11:53:23.581985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.263 [2024-11-29 11:53:23.582332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.263 [2024-11-29 11:53:23.582658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.198 11:53:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.198 11:53:24 -- common/autotest_common.sh@862 -- # return 0 00:09:19.199 11:53:24 -- spdkcli/tcp.sh@31 -- # socat_pid=66366 00:09:19.199 11:53:24 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:19.199 11:53:24 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:19.199 [ 00:09:19.199 "bdev_malloc_delete", 00:09:19.199 "bdev_malloc_create", 00:09:19.199 "bdev_null_resize", 00:09:19.199 "bdev_null_delete", 00:09:19.199 "bdev_null_create", 00:09:19.199 "bdev_nvme_cuse_unregister", 00:09:19.199 "bdev_nvme_cuse_register", 00:09:19.199 "bdev_opal_new_user", 00:09:19.199 "bdev_opal_set_lock_state", 00:09:19.199 "bdev_opal_delete", 00:09:19.199 "bdev_opal_get_info", 00:09:19.199 "bdev_opal_create", 00:09:19.199 "bdev_nvme_opal_revert", 00:09:19.199 "bdev_nvme_opal_init", 00:09:19.199 "bdev_nvme_send_cmd", 00:09:19.199 "bdev_nvme_get_path_iostat", 00:09:19.199 "bdev_nvme_get_mdns_discovery_info", 00:09:19.199 "bdev_nvme_stop_mdns_discovery", 00:09:19.199 "bdev_nvme_start_mdns_discovery", 00:09:19.199 "bdev_nvme_set_multipath_policy", 00:09:19.199 "bdev_nvme_set_preferred_path", 00:09:19.199 "bdev_nvme_get_io_paths", 00:09:19.199 "bdev_nvme_remove_error_injection", 00:09:19.199 "bdev_nvme_add_error_injection", 00:09:19.199 "bdev_nvme_get_discovery_info", 00:09:19.199 "bdev_nvme_stop_discovery", 00:09:19.199 "bdev_nvme_start_discovery", 00:09:19.199 "bdev_nvme_get_controller_health_info", 00:09:19.199 "bdev_nvme_disable_controller", 00:09:19.199 "bdev_nvme_enable_controller", 00:09:19.199 "bdev_nvme_reset_controller", 00:09:19.199 "bdev_nvme_get_transport_statistics", 00:09:19.199 "bdev_nvme_apply_firmware", 00:09:19.199 "bdev_nvme_detach_controller", 00:09:19.199 "bdev_nvme_get_controllers", 00:09:19.199 "bdev_nvme_attach_controller", 00:09:19.199 "bdev_nvme_set_hotplug", 00:09:19.199 "bdev_nvme_set_options", 00:09:19.199 "bdev_passthru_delete", 00:09:19.199 "bdev_passthru_create", 00:09:19.199 "bdev_lvol_grow_lvstore", 00:09:19.199 "bdev_lvol_get_lvols", 00:09:19.199 "bdev_lvol_get_lvstores", 00:09:19.199 "bdev_lvol_delete", 00:09:19.199 "bdev_lvol_set_read_only", 00:09:19.199 "bdev_lvol_resize", 00:09:19.199 "bdev_lvol_decouple_parent", 00:09:19.199 "bdev_lvol_inflate", 00:09:19.199 "bdev_lvol_rename", 00:09:19.199 "bdev_lvol_clone_bdev", 00:09:19.199 "bdev_lvol_clone", 00:09:19.199 "bdev_lvol_snapshot", 00:09:19.199 "bdev_lvol_create", 00:09:19.199 "bdev_lvol_delete_lvstore", 00:09:19.199 "bdev_lvol_rename_lvstore", 00:09:19.199 "bdev_lvol_create_lvstore", 00:09:19.199 "bdev_raid_set_options", 00:09:19.199 "bdev_raid_remove_base_bdev", 00:09:19.199 "bdev_raid_add_base_bdev", 
00:09:19.199 "bdev_raid_delete", 00:09:19.199 "bdev_raid_create", 00:09:19.199 "bdev_raid_get_bdevs", 00:09:19.199 "bdev_error_inject_error", 00:09:19.199 "bdev_error_delete", 00:09:19.199 "bdev_error_create", 00:09:19.199 "bdev_split_delete", 00:09:19.199 "bdev_split_create", 00:09:19.199 "bdev_delay_delete", 00:09:19.199 "bdev_delay_create", 00:09:19.199 "bdev_delay_update_latency", 00:09:19.199 "bdev_zone_block_delete", 00:09:19.199 "bdev_zone_block_create", 00:09:19.199 "blobfs_create", 00:09:19.199 "blobfs_detect", 00:09:19.199 "blobfs_set_cache_size", 00:09:19.199 "bdev_aio_delete", 00:09:19.199 "bdev_aio_rescan", 00:09:19.199 "bdev_aio_create", 00:09:19.199 "bdev_ftl_set_property", 00:09:19.199 "bdev_ftl_get_properties", 00:09:19.199 "bdev_ftl_get_stats", 00:09:19.199 "bdev_ftl_unmap", 00:09:19.199 "bdev_ftl_unload", 00:09:19.199 "bdev_ftl_delete", 00:09:19.199 "bdev_ftl_load", 00:09:19.199 "bdev_ftl_create", 00:09:19.199 "bdev_virtio_attach_controller", 00:09:19.199 "bdev_virtio_scsi_get_devices", 00:09:19.199 "bdev_virtio_detach_controller", 00:09:19.199 "bdev_virtio_blk_set_hotplug", 00:09:19.199 "bdev_iscsi_delete", 00:09:19.199 "bdev_iscsi_create", 00:09:19.199 "bdev_iscsi_set_options", 00:09:19.199 "bdev_uring_delete", 00:09:19.199 "bdev_uring_create", 00:09:19.199 "accel_error_inject_error", 00:09:19.199 "ioat_scan_accel_module", 00:09:19.199 "dsa_scan_accel_module", 00:09:19.199 "iaa_scan_accel_module", 00:09:19.199 "iscsi_set_options", 00:09:19.199 "iscsi_get_auth_groups", 00:09:19.199 "iscsi_auth_group_remove_secret", 00:09:19.199 "iscsi_auth_group_add_secret", 00:09:19.199 "iscsi_delete_auth_group", 00:09:19.199 "iscsi_create_auth_group", 00:09:19.199 "iscsi_set_discovery_auth", 00:09:19.199 "iscsi_get_options", 00:09:19.199 "iscsi_target_node_request_logout", 00:09:19.199 "iscsi_target_node_set_redirect", 00:09:19.199 "iscsi_target_node_set_auth", 00:09:19.199 "iscsi_target_node_add_lun", 00:09:19.199 "iscsi_get_connections", 00:09:19.199 "iscsi_portal_group_set_auth", 00:09:19.199 "iscsi_start_portal_group", 00:09:19.199 "iscsi_delete_portal_group", 00:09:19.199 "iscsi_create_portal_group", 00:09:19.199 "iscsi_get_portal_groups", 00:09:19.199 "iscsi_delete_target_node", 00:09:19.199 "iscsi_target_node_remove_pg_ig_maps", 00:09:19.199 "iscsi_target_node_add_pg_ig_maps", 00:09:19.199 "iscsi_create_target_node", 00:09:19.199 "iscsi_get_target_nodes", 00:09:19.199 "iscsi_delete_initiator_group", 00:09:19.199 "iscsi_initiator_group_remove_initiators", 00:09:19.199 "iscsi_initiator_group_add_initiators", 00:09:19.199 "iscsi_create_initiator_group", 00:09:19.199 "iscsi_get_initiator_groups", 00:09:19.199 "nvmf_set_crdt", 00:09:19.199 "nvmf_set_config", 00:09:19.199 "nvmf_set_max_subsystems", 00:09:19.199 "nvmf_subsystem_get_listeners", 00:09:19.199 "nvmf_subsystem_get_qpairs", 00:09:19.199 "nvmf_subsystem_get_controllers", 00:09:19.199 "nvmf_get_stats", 00:09:19.199 "nvmf_get_transports", 00:09:19.199 "nvmf_create_transport", 00:09:19.199 "nvmf_get_targets", 00:09:19.199 "nvmf_delete_target", 00:09:19.199 "nvmf_create_target", 00:09:19.199 "nvmf_subsystem_allow_any_host", 00:09:19.199 "nvmf_subsystem_remove_host", 00:09:19.199 "nvmf_subsystem_add_host", 00:09:19.199 "nvmf_subsystem_remove_ns", 00:09:19.199 "nvmf_subsystem_add_ns", 00:09:19.199 "nvmf_subsystem_listener_set_ana_state", 00:09:19.199 "nvmf_discovery_get_referrals", 00:09:19.199 "nvmf_discovery_remove_referral", 00:09:19.199 "nvmf_discovery_add_referral", 00:09:19.199 "nvmf_subsystem_remove_listener", 00:09:19.199 
"nvmf_subsystem_add_listener", 00:09:19.199 "nvmf_delete_subsystem", 00:09:19.199 "nvmf_create_subsystem", 00:09:19.199 "nvmf_get_subsystems", 00:09:19.199 "env_dpdk_get_mem_stats", 00:09:19.199 "nbd_get_disks", 00:09:19.199 "nbd_stop_disk", 00:09:19.199 "nbd_start_disk", 00:09:19.199 "ublk_recover_disk", 00:09:19.199 "ublk_get_disks", 00:09:19.199 "ublk_stop_disk", 00:09:19.199 "ublk_start_disk", 00:09:19.199 "ublk_destroy_target", 00:09:19.199 "ublk_create_target", 00:09:19.199 "virtio_blk_create_transport", 00:09:19.199 "virtio_blk_get_transports", 00:09:19.199 "vhost_controller_set_coalescing", 00:09:19.199 "vhost_get_controllers", 00:09:19.199 "vhost_delete_controller", 00:09:19.199 "vhost_create_blk_controller", 00:09:19.199 "vhost_scsi_controller_remove_target", 00:09:19.199 "vhost_scsi_controller_add_target", 00:09:19.199 "vhost_start_scsi_controller", 00:09:19.199 "vhost_create_scsi_controller", 00:09:19.199 "thread_set_cpumask", 00:09:19.199 "framework_get_scheduler", 00:09:19.199 "framework_set_scheduler", 00:09:19.199 "framework_get_reactors", 00:09:19.199 "thread_get_io_channels", 00:09:19.199 "thread_get_pollers", 00:09:19.199 "thread_get_stats", 00:09:19.199 "framework_monitor_context_switch", 00:09:19.199 "spdk_kill_instance", 00:09:19.199 "log_enable_timestamps", 00:09:19.199 "log_get_flags", 00:09:19.199 "log_clear_flag", 00:09:19.199 "log_set_flag", 00:09:19.199 "log_get_level", 00:09:19.199 "log_set_level", 00:09:19.199 "log_get_print_level", 00:09:19.199 "log_set_print_level", 00:09:19.199 "framework_enable_cpumask_locks", 00:09:19.199 "framework_disable_cpumask_locks", 00:09:19.199 "framework_wait_init", 00:09:19.199 "framework_start_init", 00:09:19.199 "scsi_get_devices", 00:09:19.199 "bdev_get_histogram", 00:09:19.199 "bdev_enable_histogram", 00:09:19.199 "bdev_set_qos_limit", 00:09:19.199 "bdev_set_qd_sampling_period", 00:09:19.199 "bdev_get_bdevs", 00:09:19.199 "bdev_reset_iostat", 00:09:19.199 "bdev_get_iostat", 00:09:19.199 "bdev_examine", 00:09:19.199 "bdev_wait_for_examine", 00:09:19.199 "bdev_set_options", 00:09:19.199 "notify_get_notifications", 00:09:19.199 "notify_get_types", 00:09:19.199 "accel_get_stats", 00:09:19.199 "accel_set_options", 00:09:19.199 "accel_set_driver", 00:09:19.199 "accel_crypto_key_destroy", 00:09:19.199 "accel_crypto_keys_get", 00:09:19.199 "accel_crypto_key_create", 00:09:19.199 "accel_assign_opc", 00:09:19.199 "accel_get_module_info", 00:09:19.199 "accel_get_opc_assignments", 00:09:19.199 "vmd_rescan", 00:09:19.199 "vmd_remove_device", 00:09:19.199 "vmd_enable", 00:09:19.199 "sock_set_default_impl", 00:09:19.199 "sock_impl_set_options", 00:09:19.199 "sock_impl_get_options", 00:09:19.199 "iobuf_get_stats", 00:09:19.199 "iobuf_set_options", 00:09:19.199 "framework_get_pci_devices", 00:09:19.199 "framework_get_config", 00:09:19.199 "framework_get_subsystems", 00:09:19.199 "trace_get_info", 00:09:19.199 "trace_get_tpoint_group_mask", 00:09:19.199 "trace_disable_tpoint_group", 00:09:19.199 "trace_enable_tpoint_group", 00:09:19.199 "trace_clear_tpoint_mask", 00:09:19.199 "trace_set_tpoint_mask", 00:09:19.199 "spdk_get_version", 00:09:19.199 "rpc_get_methods" 00:09:19.199 ] 00:09:19.199 11:53:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:19.199 11:53:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.200 11:53:24 -- common/autotest_common.sh@10 -- # set +x 00:09:19.200 11:53:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:19.200 11:53:24 -- spdkcli/tcp.sh@38 -- # killprocess 66349 00:09:19.200 
11:53:24 -- common/autotest_common.sh@936 -- # '[' -z 66349 ']' 00:09:19.200 11:53:24 -- common/autotest_common.sh@940 -- # kill -0 66349 00:09:19.200 11:53:24 -- common/autotest_common.sh@941 -- # uname 00:09:19.200 11:53:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.200 11:53:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66349 00:09:19.200 11:53:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.200 11:53:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.200 killing process with pid 66349 00:09:19.200 11:53:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66349' 00:09:19.200 11:53:24 -- common/autotest_common.sh@955 -- # kill 66349 00:09:19.200 11:53:24 -- common/autotest_common.sh@960 -- # wait 66349 00:09:20.132 00:09:20.132 real 0m2.235s 00:09:20.132 user 0m3.981s 00:09:20.132 sys 0m0.613s 00:09:20.132 11:53:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.132 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:09:20.132 ************************************ 00:09:20.132 END TEST spdkcli_tcp 00:09:20.132 ************************************ 00:09:20.132 11:53:25 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:20.132 11:53:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.132 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:09:20.132 ************************************ 00:09:20.132 START TEST dpdk_mem_utility 00:09:20.132 ************************************ 00:09:20.132 11:53:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:20.132 * Looking for test storage... 00:09:20.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:20.132 11:53:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:20.132 11:53:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:20.132 11:53:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:20.132 11:53:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:20.132 11:53:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:20.132 11:53:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:20.132 11:53:25 -- scripts/common.sh@335 -- # IFS=.-: 00:09:20.132 11:53:25 -- scripts/common.sh@335 -- # read -ra ver1 00:09:20.132 11:53:25 -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.132 11:53:25 -- scripts/common.sh@336 -- # read -ra ver2 00:09:20.132 11:53:25 -- scripts/common.sh@337 -- # local 'op=<' 00:09:20.132 11:53:25 -- scripts/common.sh@339 -- # ver1_l=2 00:09:20.132 11:53:25 -- scripts/common.sh@340 -- # ver2_l=1 00:09:20.132 11:53:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:20.132 11:53:25 -- scripts/common.sh@343 -- # case "$op" in 00:09:20.132 11:53:25 -- scripts/common.sh@344 -- # : 1 00:09:20.132 11:53:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:20.132 11:53:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.132 11:53:25 -- scripts/common.sh@364 -- # decimal 1 00:09:20.132 11:53:25 -- scripts/common.sh@352 -- # local d=1 00:09:20.132 11:53:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.132 11:53:25 -- scripts/common.sh@354 -- # echo 1 00:09:20.132 11:53:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:20.132 11:53:25 -- scripts/common.sh@365 -- # decimal 2 00:09:20.132 11:53:25 -- scripts/common.sh@352 -- # local d=2 00:09:20.132 11:53:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.132 11:53:25 -- scripts/common.sh@354 -- # echo 2 00:09:20.132 11:53:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:20.132 11:53:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:20.132 11:53:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:20.132 11:53:25 -- scripts/common.sh@367 -- # return 0 00:09:20.132 11:53:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:20.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.132 --rc genhtml_branch_coverage=1 00:09:20.132 --rc genhtml_function_coverage=1 00:09:20.132 --rc genhtml_legend=1 00:09:20.132 --rc geninfo_all_blocks=1 00:09:20.132 --rc geninfo_unexecuted_blocks=1 00:09:20.132 00:09:20.132 ' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:20.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.132 --rc genhtml_branch_coverage=1 00:09:20.132 --rc genhtml_function_coverage=1 00:09:20.132 --rc genhtml_legend=1 00:09:20.132 --rc geninfo_all_blocks=1 00:09:20.132 --rc geninfo_unexecuted_blocks=1 00:09:20.132 00:09:20.132 ' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:20.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.132 --rc genhtml_branch_coverage=1 00:09:20.132 --rc genhtml_function_coverage=1 00:09:20.132 --rc genhtml_legend=1 00:09:20.132 --rc geninfo_all_blocks=1 00:09:20.132 --rc geninfo_unexecuted_blocks=1 00:09:20.132 00:09:20.132 ' 00:09:20.132 11:53:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:20.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.132 --rc genhtml_branch_coverage=1 00:09:20.132 --rc genhtml_function_coverage=1 00:09:20.132 --rc genhtml_legend=1 00:09:20.132 --rc geninfo_all_blocks=1 00:09:20.132 --rc geninfo_unexecuted_blocks=1 00:09:20.132 00:09:20.132 ' 00:09:20.132 11:53:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:20.132 11:53:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66447 00:09:20.132 11:53:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.132 11:53:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66447 00:09:20.132 11:53:25 -- common/autotest_common.sh@829 -- # '[' -z 66447 ']' 00:09:20.132 11:53:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.132 11:53:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.132 11:53:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
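A hedged sketch of the two steps the dpdk_mem_utility test performs next: ask the running target to dump its DPDK memory state (the RPC reports the dump file, /tmp/spdk_mem_dump.txt), then post-process that dump with dpdk_mem_info.py, first as a summary and then per heap with -m 0, matching the output below.

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                 # heaps, mempools and memzones summary
$SPDK/scripts/dpdk_mem_info.py -m 0            # detailed element listing for heap 0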
00:09:20.132 11:53:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.132 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:09:20.132 [2024-11-29 11:53:25.617425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:20.132 [2024-11-29 11:53:25.617582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66447 ] 00:09:20.391 [2024-11-29 11:53:25.756735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.391 [2024-11-29 11:53:25.889029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.391 [2024-11-29 11:53:25.889230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.328 11:53:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.328 11:53:26 -- common/autotest_common.sh@862 -- # return 0 00:09:21.328 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:21.328 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:21.328 11:53:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.328 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:09:21.328 { 00:09:21.328 "filename": "/tmp/spdk_mem_dump.txt" 00:09:21.328 } 00:09:21.328 11:53:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.328 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:21.328 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:21.328 1 heaps totaling size 814.000000 MiB 00:09:21.328 size: 814.000000 MiB heap id: 0 00:09:21.328 end heaps---------- 00:09:21.328 8 mempools totaling size 598.116089 MiB 00:09:21.328 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:21.328 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:21.328 size: 84.521057 MiB name: bdev_io_66447 00:09:21.328 size: 51.011292 MiB name: evtpool_66447 00:09:21.328 size: 50.003479 MiB name: msgpool_66447 00:09:21.328 size: 21.763794 MiB name: PDU_Pool 00:09:21.328 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:21.328 size: 0.026123 MiB name: Session_Pool 00:09:21.328 end mempools------- 00:09:21.328 6 memzones totaling size 4.142822 MiB 00:09:21.328 size: 1.000366 MiB name: RG_ring_0_66447 00:09:21.328 size: 1.000366 MiB name: RG_ring_1_66447 00:09:21.328 size: 1.000366 MiB name: RG_ring_4_66447 00:09:21.328 size: 1.000366 MiB name: RG_ring_5_66447 00:09:21.328 size: 0.125366 MiB name: RG_ring_2_66447 00:09:21.328 size: 0.015991 MiB name: RG_ring_3_66447 00:09:21.328 end memzones------- 00:09:21.328 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:21.328 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:09:21.328 list of free elements. 
size: 12.471375 MiB 00:09:21.328 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:21.328 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:21.328 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:21.328 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:21.328 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:21.328 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:21.328 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:21.328 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:21.328 element at address: 0x200000200000 with size: 0.832825 MiB 00:09:21.328 element at address: 0x20001aa00000 with size: 0.569153 MiB 00:09:21.328 element at address: 0x20000b200000 with size: 0.488892 MiB 00:09:21.328 element at address: 0x200000800000 with size: 0.486145 MiB 00:09:21.328 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:21.328 element at address: 0x200027e00000 with size: 0.395752 MiB 00:09:21.328 element at address: 0x200003a00000 with size: 0.347839 MiB 00:09:21.328 list of standard malloc elements. size: 199.266052 MiB 00:09:21.328 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:21.328 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:21.328 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:21.328 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:21.328 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:21.328 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:21.328 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:21.328 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:21.328 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:21.328 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:09:21.328 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087c740 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087c800 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:21.328 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59180 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59240 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59300 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59480 with size: 0.000183 MiB 00:09:21.329 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59600 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59780 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59840 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59900 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93700 with size: 0.000183 MiB 
00:09:21.329 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:21.329 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:21.330 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e65500 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:09:21.330 element at 
address: 0x200027e6c900 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6edc0 
with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:21.330 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:21.331 list of memzone associated elements. size: 602.262573 MiB 00:09:21.331 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:21.331 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:21.331 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:21.331 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:21.331 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:21.331 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66447_0 00:09:21.331 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:21.331 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66447_0 00:09:21.331 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:21.331 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66447_0 00:09:21.331 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:21.331 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:21.331 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:21.331 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:21.331 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:21.331 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66447 00:09:21.331 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:21.331 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66447 00:09:21.331 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:21.331 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66447 00:09:21.331 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:21.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:21.331 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:21.331 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:21.331 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:21.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:21.331 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:21.331 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:21.331 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:21.331 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66447 00:09:21.331 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:21.331 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66447 00:09:21.331 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:21.331 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66447 00:09:21.331 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:21.331 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66447 00:09:21.331 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:21.331 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66447 00:09:21.331 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:21.331 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:21.331 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:21.331 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:21.331 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:21.331 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:21.331 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:21.331 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66447 00:09:21.331 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:21.331 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:21.331 element at address: 0x200027e65680 with size: 0.023743 MiB 00:09:21.331 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:21.331 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:21.331 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66447 00:09:21.331 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:09:21.331 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:21.331 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:09:21.331 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66447 00:09:21.331 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:21.331 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66447 00:09:21.331 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:09:21.331 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:21.331 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:21.331 11:53:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66447 00:09:21.331 11:53:26 -- common/autotest_common.sh@936 -- # '[' -z 66447 ']' 00:09:21.331 11:53:26 -- common/autotest_common.sh@940 -- # kill -0 66447 00:09:21.331 11:53:26 -- common/autotest_common.sh@941 -- # uname 00:09:21.331 11:53:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.331 11:53:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66447 00:09:21.590 11:53:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.590 11:53:26 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.590 killing process with pid 66447 00:09:21.590 11:53:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66447' 00:09:21.590 11:53:26 -- common/autotest_common.sh@955 -- # kill 66447 00:09:21.590 11:53:26 -- common/autotest_common.sh@960 -- # wait 66447 00:09:22.160 00:09:22.160 real 0m2.016s 00:09:22.160 user 0m2.070s 00:09:22.160 sys 0m0.591s 00:09:22.160 11:53:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.160 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:09:22.160 ************************************ 00:09:22.160 END TEST dpdk_mem_utility 00:09:22.160 ************************************ 00:09:22.160 11:53:27 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:22.160 11:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.160 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:09:22.160 ************************************ 00:09:22.160 START TEST event 00:09:22.160 ************************************ 00:09:22.160 11:53:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:22.160 * Looking for test storage... 00:09:22.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:22.160 11:53:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:22.160 11:53:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:22.160 11:53:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:22.160 11:53:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:22.160 11:53:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:22.160 11:53:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:22.160 11:53:27 -- scripts/common.sh@335 -- # IFS=.-: 00:09:22.160 11:53:27 -- scripts/common.sh@335 -- # read -ra ver1 00:09:22.160 11:53:27 -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.160 11:53:27 -- scripts/common.sh@336 -- # read -ra ver2 00:09:22.160 11:53:27 -- scripts/common.sh@337 -- # local 'op=<' 00:09:22.160 11:53:27 -- scripts/common.sh@339 -- # ver1_l=2 00:09:22.160 11:53:27 -- scripts/common.sh@340 -- # ver2_l=1 00:09:22.160 11:53:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:22.160 11:53:27 -- scripts/common.sh@343 -- # case "$op" in 00:09:22.160 11:53:27 -- scripts/common.sh@344 -- # : 1 00:09:22.160 11:53:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:22.160 11:53:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.160 11:53:27 -- scripts/common.sh@364 -- # decimal 1 00:09:22.160 11:53:27 -- scripts/common.sh@352 -- # local d=1 00:09:22.160 11:53:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.160 11:53:27 -- scripts/common.sh@354 -- # echo 1 00:09:22.160 11:53:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:22.160 11:53:27 -- scripts/common.sh@365 -- # decimal 2 00:09:22.160 11:53:27 -- scripts/common.sh@352 -- # local d=2 00:09:22.160 11:53:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.160 11:53:27 -- scripts/common.sh@354 -- # echo 2 00:09:22.160 11:53:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:22.160 11:53:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:22.160 11:53:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:22.160 11:53:27 -- scripts/common.sh@367 -- # return 0 00:09:22.160 11:53:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.160 --rc genhtml_branch_coverage=1 00:09:22.160 --rc genhtml_function_coverage=1 00:09:22.160 --rc genhtml_legend=1 00:09:22.160 --rc geninfo_all_blocks=1 00:09:22.160 --rc geninfo_unexecuted_blocks=1 00:09:22.160 00:09:22.160 ' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.160 --rc genhtml_branch_coverage=1 00:09:22.160 --rc genhtml_function_coverage=1 00:09:22.160 --rc genhtml_legend=1 00:09:22.160 --rc geninfo_all_blocks=1 00:09:22.160 --rc geninfo_unexecuted_blocks=1 00:09:22.160 00:09:22.160 ' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.160 --rc genhtml_branch_coverage=1 00:09:22.160 --rc genhtml_function_coverage=1 00:09:22.160 --rc genhtml_legend=1 00:09:22.160 --rc geninfo_all_blocks=1 00:09:22.160 --rc geninfo_unexecuted_blocks=1 00:09:22.160 00:09:22.160 ' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.160 --rc genhtml_branch_coverage=1 00:09:22.160 --rc genhtml_function_coverage=1 00:09:22.160 --rc genhtml_legend=1 00:09:22.160 --rc geninfo_all_blocks=1 00:09:22.160 --rc geninfo_unexecuted_blocks=1 00:09:22.160 00:09:22.160 ' 00:09:22.160 11:53:27 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:22.160 11:53:27 -- bdev/nbd_common.sh@6 -- # set -e 00:09:22.160 11:53:27 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:22.160 11:53:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:22.160 11:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.160 11:53:27 -- common/autotest_common.sh@10 -- # set +x 00:09:22.160 ************************************ 00:09:22.160 START TEST event_perf 00:09:22.160 ************************************ 00:09:22.160 11:53:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:22.160 Running I/O for 1 seconds...[2024-11-29 11:53:27.644502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
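The event_perf run starting here takes only a core mask and a run time. A standalone invocation outside the autotest harness (a sketch, assuming the same built SPDK tree and pre-configured hugepages) would be roughly:

    cd /home/vagrant/spdk_repo/spdk
    # -m is the reactor core mask (0xF = cores 0-3), -t the measurement window in seconds
    sudo ./test/event/event_perf/event_perf -m 0xF -t 1

The per-lcore counters printed when the run completes are the number of events each reactor processed during that window.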
00:09:22.160 [2024-11-29 11:53:27.644657] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66537 ] 00:09:22.419 [2024-11-29 11:53:27.782798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.419 [2024-11-29 11:53:27.907280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.419 [2024-11-29 11:53:27.907470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.419 [2024-11-29 11:53:27.907657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.419 Running I/O for 1 seconds...[2024-11-29 11:53:27.907656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.797 00:09:23.797 lcore 0: 113373 00:09:23.797 lcore 1: 113376 00:09:23.797 lcore 2: 113379 00:09:23.797 lcore 3: 113371 00:09:23.797 done. 00:09:23.797 00:09:23.797 real 0m1.387s 00:09:23.797 user 0m4.179s 00:09:23.797 sys 0m0.083s 00:09:23.797 11:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.797 11:53:29 -- common/autotest_common.sh@10 -- # set +x 00:09:23.797 ************************************ 00:09:23.797 END TEST event_perf 00:09:23.797 ************************************ 00:09:23.797 11:53:29 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:23.797 11:53:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:23.797 11:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.797 11:53:29 -- common/autotest_common.sh@10 -- # set +x 00:09:23.797 ************************************ 00:09:23.797 START TEST event_reactor 00:09:23.797 ************************************ 00:09:23.797 11:53:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:23.797 [2024-11-29 11:53:29.086787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:23.797 [2024-11-29 11:53:29.087448] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66575 ] 00:09:23.797 [2024-11-29 11:53:29.225882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.056 [2024-11-29 11:53:29.347924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.995 test_start 00:09:24.995 oneshot 00:09:24.995 tick 100 00:09:24.995 tick 100 00:09:24.995 tick 250 00:09:24.995 tick 100 00:09:24.995 tick 100 00:09:24.995 tick 100 00:09:24.995 tick 250 00:09:24.995 tick 500 00:09:24.995 tick 100 00:09:24.995 tick 100 00:09:24.995 tick 250 00:09:24.995 tick 100 00:09:24.995 tick 100 00:09:24.995 test_end 00:09:24.995 00:09:24.995 real 0m1.379s 00:09:24.995 user 0m1.192s 00:09:24.995 sys 0m0.079s 00:09:24.995 11:53:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:24.995 11:53:30 -- common/autotest_common.sh@10 -- # set +x 00:09:24.995 ************************************ 00:09:24.995 END TEST event_reactor 00:09:24.995 ************************************ 00:09:24.995 11:53:30 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:24.995 11:53:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:24.995 11:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.995 11:53:30 -- common/autotest_common.sh@10 -- # set +x 00:09:24.995 ************************************ 00:09:24.995 START TEST event_reactor_perf 00:09:24.995 ************************************ 00:09:24.995 11:53:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:25.254 [2024-11-29 11:53:30.520126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
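The reactor run above and the reactor_perf run starting here follow the same single-core pattern; invoked by hand (again a sketch against the same built tree), the equivalent commands would be roughly:

    # replays the oneshot/tick schedule shown above on a single reactor
    sudo ./test/event/reactor/reactor -t 1
    # measures raw event throughput on a single reactor, reported below as events per second
    sudo ./test/event/reactor_perf/reactor_perf -t 1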
00:09:25.254 [2024-11-29 11:53:30.520237] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66605 ] 00:09:25.254 [2024-11-29 11:53:30.657631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.513 [2024-11-29 11:53:30.782800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.449 test_start 00:09:26.449 test_end 00:09:26.449 Performance: 408838 events per second 00:09:26.449 00:09:26.449 real 0m1.373s 00:09:26.449 user 0m1.189s 00:09:26.449 sys 0m0.076s 00:09:26.449 11:53:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:26.449 ************************************ 00:09:26.449 11:53:31 -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 END TEST event_reactor_perf 00:09:26.449 ************************************ 00:09:26.449 11:53:31 -- event/event.sh@49 -- # uname -s 00:09:26.449 11:53:31 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:26.449 11:53:31 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:26.449 11:53:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:26.449 11:53:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.449 11:53:31 -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 ************************************ 00:09:26.449 START TEST event_scheduler 00:09:26.449 ************************************ 00:09:26.449 11:53:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:26.709 * Looking for test storage... 00:09:26.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:26.709 11:53:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:26.709 11:53:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:26.709 11:53:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:26.709 11:53:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:26.709 11:53:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:26.709 11:53:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:26.709 11:53:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:26.709 11:53:32 -- scripts/common.sh@335 -- # IFS=.-: 00:09:26.709 11:53:32 -- scripts/common.sh@335 -- # read -ra ver1 00:09:26.709 11:53:32 -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.709 11:53:32 -- scripts/common.sh@336 -- # read -ra ver2 00:09:26.709 11:53:32 -- scripts/common.sh@337 -- # local 'op=<' 00:09:26.709 11:53:32 -- scripts/common.sh@339 -- # ver1_l=2 00:09:26.709 11:53:32 -- scripts/common.sh@340 -- # ver2_l=1 00:09:26.709 11:53:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:26.709 11:53:32 -- scripts/common.sh@343 -- # case "$op" in 00:09:26.709 11:53:32 -- scripts/common.sh@344 -- # : 1 00:09:26.709 11:53:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:26.709 11:53:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.709 11:53:32 -- scripts/common.sh@364 -- # decimal 1 00:09:26.709 11:53:32 -- scripts/common.sh@352 -- # local d=1 00:09:26.709 11:53:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.709 11:53:32 -- scripts/common.sh@354 -- # echo 1 00:09:26.709 11:53:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:26.709 11:53:32 -- scripts/common.sh@365 -- # decimal 2 00:09:26.709 11:53:32 -- scripts/common.sh@352 -- # local d=2 00:09:26.709 11:53:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.709 11:53:32 -- scripts/common.sh@354 -- # echo 2 00:09:26.709 11:53:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:26.709 11:53:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:26.709 11:53:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:26.709 11:53:32 -- scripts/common.sh@367 -- # return 0 00:09:26.709 11:53:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.709 11:53:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.709 --rc genhtml_branch_coverage=1 00:09:26.709 --rc genhtml_function_coverage=1 00:09:26.709 --rc genhtml_legend=1 00:09:26.709 --rc geninfo_all_blocks=1 00:09:26.709 --rc geninfo_unexecuted_blocks=1 00:09:26.709 00:09:26.709 ' 00:09:26.709 11:53:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.709 --rc genhtml_branch_coverage=1 00:09:26.709 --rc genhtml_function_coverage=1 00:09:26.709 --rc genhtml_legend=1 00:09:26.709 --rc geninfo_all_blocks=1 00:09:26.709 --rc geninfo_unexecuted_blocks=1 00:09:26.709 00:09:26.709 ' 00:09:26.709 11:53:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.709 --rc genhtml_branch_coverage=1 00:09:26.709 --rc genhtml_function_coverage=1 00:09:26.709 --rc genhtml_legend=1 00:09:26.709 --rc geninfo_all_blocks=1 00:09:26.709 --rc geninfo_unexecuted_blocks=1 00:09:26.709 00:09:26.709 ' 00:09:26.709 11:53:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:26.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.709 --rc genhtml_branch_coverage=1 00:09:26.709 --rc genhtml_function_coverage=1 00:09:26.709 --rc genhtml_legend=1 00:09:26.709 --rc geninfo_all_blocks=1 00:09:26.709 --rc geninfo_unexecuted_blocks=1 00:09:26.709 00:09:26.709 ' 00:09:26.709 11:53:32 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:26.709 11:53:32 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66679 00:09:26.709 11:53:32 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:26.709 11:53:32 -- scheduler/scheduler.sh@37 -- # waitforlisten 66679 00:09:26.709 11:53:32 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:26.709 11:53:32 -- common/autotest_common.sh@829 -- # '[' -z 66679 ']' 00:09:26.709 11:53:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.709 11:53:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.709 11:53:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
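Condensed, the scheduler startup being traced here is: launch the test app with its reactors on cores 0-3 and the main lcore on core 2, wait for its RPC socket, then configure it over RPC. A sketch of that sequence, using the helper names from the trace:

    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # waitforlisten polls until the app is listening on /var/tmp/spdk.sock
    waitforlisten "$scheduler_pid"
    # select the dynamic scheduler, then let framework initialization proceed
    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init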
00:09:26.709 11:53:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.709 11:53:32 -- common/autotest_common.sh@10 -- # set +x 00:09:26.709 [2024-11-29 11:53:32.167481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:26.709 [2024-11-29 11:53:32.167655] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66679 ] 00:09:26.970 [2024-11-29 11:53:32.309900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.970 [2024-11-29 11:53:32.452997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.970 [2024-11-29 11:53:32.453124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.970 [2024-11-29 11:53:32.453303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.970 [2024-11-29 11:53:32.453311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.908 11:53:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.908 11:53:33 -- common/autotest_common.sh@862 -- # return 0 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:27.908 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 POWER: Env isn't set yet! 00:09:27.908 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:27.908 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.908 POWER: Cannot set governor of lcore 0 to userspace 00:09:27.908 POWER: Attempting to initialise PSTAT power management... 00:09:27.908 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.908 POWER: Cannot set governor of lcore 0 to performance 00:09:27.908 POWER: Attempting to initialise CPPC power management... 00:09:27.908 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:27.908 POWER: Cannot set governor of lcore 0 to userspace 00:09:27.908 POWER: Attempting to initialise VM power management... 
00:09:27.908 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:27.908 POWER: Unable to set Power Management Environment for lcore 0 00:09:27.908 [2024-11-29 11:53:33.214938] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:27.908 [2024-11-29 11:53:33.214959] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:27.908 [2024-11-29 11:53:33.214967] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:27.908 [2024-11-29 11:53:33.214981] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:27.908 [2024-11-29 11:53:33.214988] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:27.908 [2024-11-29 11:53:33.214995] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:27.908 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:27.908 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 [2024-11-29 11:53:33.314979] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:27.908 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:27.908 11:53:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.908 11:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 ************************************ 00:09:27.908 START TEST scheduler_create_thread 00:09:27.908 ************************************ 00:09:27.908 11:53:33 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:27.908 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 2 00:09:27.908 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:27.908 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 3 00:09:27.908 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.908 11:53:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:27.908 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.908 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 4 00:09:27.908 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 5 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 6 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 7 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 8 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 9 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 10 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.909 11:53:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:27.909 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.909 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:27.909 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.168 11:53:33 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:28.168 11:53:33 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:28.168 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.168 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:28.168 11:53:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.168 11:53:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:28.168 11:53:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.168 11:53:33 -- common/autotest_common.sh@10 -- # set +x 00:09:29.561 11:53:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.561 11:53:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:29.561 11:53:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:29.561 11:53:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.561 11:53:34 -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 11:53:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.496 00:09:30.496 real 0m2.612s 00:09:30.496 user 0m0.019s 00:09:30.496 sys 0m0.008s 00:09:30.496 11:53:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.496 11:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 
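Stripped of the xtrace noise, the scheduler_create_thread test that just finished boils down to scheduler-plugin RPCs like the following (arguments as they appear in the trace; thread ids are whatever the create calls return):

    # pinned threads, first fully active then fully idle (repeated for masks 0x2, 0x4, 0x8)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # an unpinned thread whose active percentage is raised after creation
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # a thread that is created only to be deleted again
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"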
************************************ 00:09:30.496 END TEST scheduler_create_thread 00:09:30.496 ************************************ 00:09:30.496 11:53:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:30.496 11:53:35 -- scheduler/scheduler.sh@46 -- # killprocess 66679 00:09:30.496 11:53:35 -- common/autotest_common.sh@936 -- # '[' -z 66679 ']' 00:09:30.496 11:53:35 -- common/autotest_common.sh@940 -- # kill -0 66679 00:09:30.496 11:53:35 -- common/autotest_common.sh@941 -- # uname 00:09:30.496 11:53:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.496 11:53:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66679 00:09:30.755 11:53:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:30.755 11:53:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:30.755 11:53:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66679' 00:09:30.755 killing process with pid 66679 00:09:30.755 11:53:36 -- common/autotest_common.sh@955 -- # kill 66679 00:09:30.755 11:53:36 -- common/autotest_common.sh@960 -- # wait 66679 00:09:31.014 [2024-11-29 11:53:36.420150] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:31.273 00:09:31.273 real 0m4.727s 00:09:31.273 user 0m8.876s 00:09:31.273 sys 0m0.428s 00:09:31.273 11:53:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.273 ************************************ 00:09:31.273 END TEST event_scheduler 00:09:31.273 ************************************ 00:09:31.273 11:53:36 -- common/autotest_common.sh@10 -- # set +x 00:09:31.273 11:53:36 -- event/event.sh@51 -- # modprobe -n nbd 00:09:31.273 11:53:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:31.273 11:53:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.273 11:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.273 11:53:36 -- common/autotest_common.sh@10 -- # set +x 00:09:31.273 ************************************ 00:09:31.273 START TEST app_repeat 00:09:31.273 ************************************ 00:09:31.273 11:53:36 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:09:31.273 11:53:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.273 11:53:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.273 11:53:36 -- event/event.sh@13 -- # local nbd_list 00:09:31.273 11:53:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.273 11:53:36 -- event/event.sh@14 -- # local bdev_list 00:09:31.273 11:53:36 -- event/event.sh@15 -- # local repeat_times=4 00:09:31.273 11:53:36 -- event/event.sh@17 -- # modprobe nbd 00:09:31.273 11:53:36 -- event/event.sh@19 -- # repeat_pid=66773 00:09:31.273 11:53:36 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:31.273 11:53:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.273 Process app_repeat pid: 66773 00:09:31.273 11:53:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66773' 00:09:31.273 11:53:36 -- event/event.sh@23 -- # for i in {0..2} 00:09:31.273 11:53:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:31.273 spdk_app_start Round 0 00:09:31.273 11:53:36 -- event/event.sh@25 -- # waitforlisten 66773 /var/tmp/spdk-nbd.sock 00:09:31.273 11:53:36 -- common/autotest_common.sh@829 -- # '[' -z 66773 ']' 00:09:31.273 11:53:36 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:31.273 11:53:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:31.273 11:53:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:31.273 11:53:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.273 11:53:36 -- common/autotest_common.sh@10 -- # set +x 00:09:31.274 [2024-11-29 11:53:36.750381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:31.274 [2024-11-29 11:53:36.750493] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66773 ] 00:09:31.533 [2024-11-29 11:53:36.888789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.533 [2024-11-29 11:53:37.018832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.534 [2024-11-29 11:53:37.018841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.470 11:53:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.470 11:53:37 -- common/autotest_common.sh@862 -- # return 0 00:09:32.470 11:53:37 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.729 Malloc0 00:09:32.729 11:53:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.987 Malloc1 00:09:32.987 11:53:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.987 11:53:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.987 11:53:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.987 11:53:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.987 11:53:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.987 11:53:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@12 -- # local i 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.988 11:53:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:33.247 /dev/nbd0 00:09:33.247 11:53:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.247 11:53:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:33.247 11:53:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:33.247 11:53:38 -- common/autotest_common.sh@867 -- # local i 00:09:33.247 11:53:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.247 
11:53:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.247 11:53:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:33.247 11:53:38 -- common/autotest_common.sh@871 -- # break 00:09:33.247 11:53:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.247 11:53:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.247 11:53:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.247 1+0 records in 00:09:33.247 1+0 records out 00:09:33.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361396 s, 11.3 MB/s 00:09:33.247 11:53:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.247 11:53:38 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.247 11:53:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.247 11:53:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:33.247 11:53:38 -- common/autotest_common.sh@887 -- # return 0 00:09:33.247 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.247 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.247 11:53:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:33.506 /dev/nbd1 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:33.506 11:53:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:33.506 11:53:38 -- common/autotest_common.sh@867 -- # local i 00:09:33.506 11:53:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:33.506 11:53:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:33.506 11:53:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:33.506 11:53:38 -- common/autotest_common.sh@871 -- # break 00:09:33.506 11:53:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:33.506 11:53:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:33.506 11:53:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.506 1+0 records in 00:09:33.506 1+0 records out 00:09:33.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354231 s, 11.6 MB/s 00:09:33.506 11:53:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.506 11:53:38 -- common/autotest_common.sh@884 -- # size=4096 00:09:33.506 11:53:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.506 11:53:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:33.506 11:53:38 -- common/autotest_common.sh@887 -- # return 0 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.506 11:53:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:34.070 { 00:09:34.070 "nbd_device": "/dev/nbd0", 00:09:34.070 "bdev_name": "Malloc0" 00:09:34.070 }, 00:09:34.070 { 00:09:34.070 "nbd_device": 
"/dev/nbd1", 00:09:34.070 "bdev_name": "Malloc1" 00:09:34.070 } 00:09:34.070 ]' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:34.070 { 00:09:34.070 "nbd_device": "/dev/nbd0", 00:09:34.070 "bdev_name": "Malloc0" 00:09:34.070 }, 00:09:34.070 { 00:09:34.070 "nbd_device": "/dev/nbd1", 00:09:34.070 "bdev_name": "Malloc1" 00:09:34.070 } 00:09:34.070 ]' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:34.070 /dev/nbd1' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:34.070 /dev/nbd1' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@65 -- # count=2 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@95 -- # count=2 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:34.070 256+0 records in 00:09:34.070 256+0 records out 00:09:34.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00876941 s, 120 MB/s 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:34.070 256+0 records in 00:09:34.070 256+0 records out 00:09:34.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284543 s, 36.9 MB/s 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:34.070 256+0 records in 00:09:34.070 256+0 records out 00:09:34.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222595 s, 47.1 MB/s 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@51 -- # local i 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.070 11:53:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@41 -- # break 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.329 11:53:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@41 -- # break 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.587 11:53:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.845 11:53:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:34.845 11:53:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:34.845 11:53:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@65 -- # true 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@104 -- # count=0 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:35.104 11:53:40 -- bdev/nbd_common.sh@109 -- # return 0 00:09:35.104 11:53:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:35.362 11:53:40 -- event/event.sh@35 -- # sleep 3 00:09:35.622 [2024-11-29 11:53:41.040794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.880 [2024-11-29 11:53:41.154701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.880 
[2024-11-29 11:53:41.154712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.880 [2024-11-29 11:53:41.230704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:35.881 [2024-11-29 11:53:41.230793] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:38.415 11:53:43 -- event/event.sh@23 -- # for i in {0..2} 00:09:38.415 spdk_app_start Round 1 00:09:38.415 11:53:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:38.415 11:53:43 -- event/event.sh@25 -- # waitforlisten 66773 /var/tmp/spdk-nbd.sock 00:09:38.415 11:53:43 -- common/autotest_common.sh@829 -- # '[' -z 66773 ']' 00:09:38.415 11:53:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:38.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:38.415 11:53:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.415 11:53:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:38.415 11:53:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.416 11:53:43 -- common/autotest_common.sh@10 -- # set +x 00:09:38.675 11:53:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.675 11:53:44 -- common/autotest_common.sh@862 -- # return 0 00:09:38.675 11:53:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.934 Malloc0 00:09:38.934 11:53:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.194 Malloc1 00:09:39.194 11:53:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@12 -- # local i 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.194 11:53:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:39.453 /dev/nbd0 00:09:39.453 11:53:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:39.453 11:53:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:39.453 11:53:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:39.453 11:53:44 -- common/autotest_common.sh@867 -- # local i 00:09:39.453 11:53:44 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:09:39.453 11:53:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.453 11:53:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:39.453 11:53:44 -- common/autotest_common.sh@871 -- # break 00:09:39.453 11:53:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.453 11:53:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.453 11:53:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:39.453 1+0 records in 00:09:39.453 1+0 records out 00:09:39.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172962 s, 23.7 MB/s 00:09:39.453 11:53:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.453 11:53:44 -- common/autotest_common.sh@884 -- # size=4096 00:09:39.453 11:53:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.453 11:53:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.453 11:53:44 -- common/autotest_common.sh@887 -- # return 0 00:09:39.453 11:53:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.453 11:53:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.453 11:53:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:39.712 /dev/nbd1 00:09:39.712 11:53:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:39.712 11:53:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:39.712 11:53:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:39.712 11:53:45 -- common/autotest_common.sh@867 -- # local i 00:09:39.712 11:53:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.712 11:53:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.712 11:53:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:39.713 11:53:45 -- common/autotest_common.sh@871 -- # break 00:09:39.713 11:53:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.713 11:53:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.713 11:53:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:39.713 1+0 records in 00:09:39.713 1+0 records out 00:09:39.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324959 s, 12.6 MB/s 00:09:39.713 11:53:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.713 11:53:45 -- common/autotest_common.sh@884 -- # size=4096 00:09:39.713 11:53:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:39.713 11:53:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.713 11:53:45 -- common/autotest_common.sh@887 -- # return 0 00:09:39.713 11:53:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.713 11:53:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.713 11:53:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.713 11:53:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.713 11:53:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:40.280 { 00:09:40.280 "nbd_device": "/dev/nbd0", 00:09:40.280 "bdev_name": "Malloc0" 00:09:40.280 }, 00:09:40.280 { 
00:09:40.280 "nbd_device": "/dev/nbd1", 00:09:40.280 "bdev_name": "Malloc1" 00:09:40.280 } 00:09:40.280 ]' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.280 { 00:09:40.280 "nbd_device": "/dev/nbd0", 00:09:40.280 "bdev_name": "Malloc0" 00:09:40.280 }, 00:09:40.280 { 00:09:40.280 "nbd_device": "/dev/nbd1", 00:09:40.280 "bdev_name": "Malloc1" 00:09:40.280 } 00:09:40.280 ]' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:40.280 /dev/nbd1' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:40.280 /dev/nbd1' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@65 -- # count=2 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@95 -- # count=2 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:40.280 256+0 records in 00:09:40.280 256+0 records out 00:09:40.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639709 s, 164 MB/s 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.280 11:53:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:40.280 256+0 records in 00:09:40.280 256+0 records out 00:09:40.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221673 s, 47.3 MB/s 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:40.281 256+0 records in 00:09:40.281 256+0 records out 00:09:40.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285431 s, 36.7 MB/s 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:40.281 
11:53:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@51 -- # local i 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.281 11:53:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@41 -- # break 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.540 11:53:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@41 -- # break 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.801 11:53:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.059 11:53:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:41.059 11:53:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:41.059 11:53:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@65 -- # true 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@65 -- # count=0 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@104 -- # count=0 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:41.319 11:53:46 -- bdev/nbd_common.sh@109 -- # return 0 00:09:41.319 11:53:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:41.579 11:53:46 -- event/event.sh@35 -- # sleep 3 00:09:41.838 [2024-11-29 11:53:47.191987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.838 [2024-11-29 11:53:47.293090] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:09:41.838 [2024-11-29 11:53:47.293102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.097 [2024-11-29 11:53:47.371382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.097 [2024-11-29 11:53:47.371495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:44.635 spdk_app_start Round 2 00:09:44.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:44.635 11:53:49 -- event/event.sh@23 -- # for i in {0..2} 00:09:44.635 11:53:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:44.635 11:53:49 -- event/event.sh@25 -- # waitforlisten 66773 /var/tmp/spdk-nbd.sock 00:09:44.635 11:53:49 -- common/autotest_common.sh@829 -- # '[' -z 66773 ']' 00:09:44.635 11:53:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:44.635 11:53:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.635 11:53:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:44.635 11:53:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.635 11:53:49 -- common/autotest_common.sh@10 -- # set +x 00:09:44.893 11:53:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.893 11:53:50 -- common/autotest_common.sh@862 -- # return 0 00:09:44.893 11:53:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.152 Malloc0 00:09:45.152 11:53:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.411 Malloc1 00:09:45.411 11:53:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@12 -- # local i 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.411 11:53:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:45.669 /dev/nbd0 00:09:45.669 11:53:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:45.669 11:53:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:45.669 11:53:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:45.669 11:53:51 -- common/autotest_common.sh@867 -- # local i 00:09:45.669 11:53:51 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.669 11:53:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.669 11:53:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:45.669 11:53:51 -- common/autotest_common.sh@871 -- # break 00:09:45.669 11:53:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.669 11:53:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.669 11:53:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.669 1+0 records in 00:09:45.669 1+0 records out 00:09:45.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040403 s, 10.1 MB/s 00:09:45.669 11:53:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.669 11:53:51 -- common/autotest_common.sh@884 -- # size=4096 00:09:45.669 11:53:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.669 11:53:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:45.669 11:53:51 -- common/autotest_common.sh@887 -- # return 0 00:09:45.669 11:53:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.669 11:53:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.669 11:53:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:45.928 /dev/nbd1 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:45.928 11:53:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:45.928 11:53:51 -- common/autotest_common.sh@867 -- # local i 00:09:45.928 11:53:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.928 11:53:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.928 11:53:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:45.928 11:53:51 -- common/autotest_common.sh@871 -- # break 00:09:45.928 11:53:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.928 11:53:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.928 11:53:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.928 1+0 records in 00:09:45.928 1+0 records out 00:09:45.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409645 s, 10.0 MB/s 00:09:45.928 11:53:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.928 11:53:51 -- common/autotest_common.sh@884 -- # size=4096 00:09:45.928 11:53:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.928 11:53:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:45.928 11:53:51 -- common/autotest_common.sh@887 -- # return 0 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.928 11:53:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.186 11:53:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:46.186 { 00:09:46.186 "nbd_device": "/dev/nbd0", 00:09:46.186 "bdev_name": "Malloc0" 
00:09:46.186 }, 00:09:46.186 { 00:09:46.186 "nbd_device": "/dev/nbd1", 00:09:46.186 "bdev_name": "Malloc1" 00:09:46.186 } 00:09:46.186 ]' 00:09:46.186 11:53:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.186 { 00:09:46.186 "nbd_device": "/dev/nbd0", 00:09:46.186 "bdev_name": "Malloc0" 00:09:46.186 }, 00:09:46.186 { 00:09:46.186 "nbd_device": "/dev/nbd1", 00:09:46.186 "bdev_name": "Malloc1" 00:09:46.186 } 00:09:46.186 ]' 00:09:46.186 11:53:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.445 11:53:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:46.445 /dev/nbd1' 00:09:46.445 11:53:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:46.445 /dev/nbd1' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@65 -- # count=2 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@95 -- # count=2 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:46.446 256+0 records in 00:09:46.446 256+0 records out 00:09:46.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110578 s, 94.8 MB/s 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:46.446 256+0 records in 00:09:46.446 256+0 records out 00:09:46.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222863 s, 47.1 MB/s 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:46.446 256+0 records in 00:09:46.446 256+0 records out 00:09:46.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244381 s, 42.9 MB/s 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@51 -- # local i 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.446 11:53:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@41 -- # break 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.705 11:53:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@41 -- # break 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.004 11:53:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@65 -- # true 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@65 -- # count=0 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@104 -- # count=0 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:47.262 11:53:52 -- bdev/nbd_common.sh@109 -- # return 0 00:09:47.262 11:53:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:47.520 11:53:52 -- event/event.sh@35 -- # sleep 3 00:09:47.779 [2024-11-29 11:53:53.147853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.779 [2024-11-29 11:53:53.248010] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:47.779 [2024-11-29 11:53:53.248057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.038 [2024-11-29 11:53:53.309101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:48.038 [2024-11-29 11:53:53.309202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.571 11:53:55 -- event/event.sh@38 -- # waitforlisten 66773 /var/tmp/spdk-nbd.sock 00:09:50.571 11:53:55 -- common/autotest_common.sh@829 -- # '[' -z 66773 ']' 00:09:50.571 11:53:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.571 11:53:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.571 11:53:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.571 11:53:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.571 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:09:50.830 11:53:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.830 11:53:56 -- common/autotest_common.sh@862 -- # return 0 00:09:50.830 11:53:56 -- event/event.sh@39 -- # killprocess 66773 00:09:50.830 11:53:56 -- common/autotest_common.sh@936 -- # '[' -z 66773 ']' 00:09:50.830 11:53:56 -- common/autotest_common.sh@940 -- # kill -0 66773 00:09:50.830 11:53:56 -- common/autotest_common.sh@941 -- # uname 00:09:50.830 11:53:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.830 11:53:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66773 00:09:50.830 killing process with pid 66773 00:09:50.830 11:53:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.830 11:53:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.830 11:53:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66773' 00:09:50.830 11:53:56 -- common/autotest_common.sh@955 -- # kill 66773 00:09:50.830 11:53:56 -- common/autotest_common.sh@960 -- # wait 66773 00:09:51.088 spdk_app_start is called in Round 0. 00:09:51.088 Shutdown signal received, stop current app iteration 00:09:51.088 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:51.088 spdk_app_start is called in Round 1. 00:09:51.088 Shutdown signal received, stop current app iteration 00:09:51.088 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:51.088 spdk_app_start is called in Round 2. 00:09:51.088 Shutdown signal received, stop current app iteration 00:09:51.088 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:51.088 spdk_app_start is called in Round 3. 
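Each app_repeat round traced above follows the same shape: create two malloc bdevs over the nbd RPC socket, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data onto each device and compare it back, then tear the exports down and ask the app to exit. A condensed, hedged sketch of one such round (socket path, rpc.py location, file paths and sizes are copied from the trace; the app_repeat binary itself and its pid bookkeeping are omitted) could look like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Two 64 MiB malloc bdevs with a 4 KiB block size; SPDK names them Malloc0/Malloc1.
    "$RPC" -s "$SOCK" bdev_malloc_create 64 4096
    "$RPC" -s "$SOCK" bdev_malloc_create 64 4096

    # Export each bdev as a local nbd block device.
    "$RPC" -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
    "$RPC" -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device and verify it byte-for-byte.
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$TMP" "$dev"
    done
    rm -f "$TMP"

    # Tear down the exports and send the app a SIGTERM through the RPC layer.
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd1
    "$RPC" -s "$SOCK" spdk_kill_instance SIGTERM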
00:09:51.088 Shutdown signal received, stop current app iteration 00:09:51.088 11:53:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:51.088 11:53:56 -- event/event.sh@42 -- # return 0 00:09:51.088 00:09:51.088 real 0m19.760s 00:09:51.088 user 0m44.147s 00:09:51.088 sys 0m3.336s 00:09:51.088 11:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.088 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:09:51.088 ************************************ 00:09:51.088 END TEST app_repeat 00:09:51.088 ************************************ 00:09:51.088 11:53:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:51.088 11:53:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:51.088 11:53:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.088 11:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.088 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:09:51.088 ************************************ 00:09:51.088 START TEST cpu_locks 00:09:51.088 ************************************ 00:09:51.088 11:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:51.347 * Looking for test storage... 00:09:51.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:51.347 11:53:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:51.347 11:53:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:51.347 11:53:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:51.347 11:53:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:51.347 11:53:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:51.347 11:53:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:51.347 11:53:56 -- scripts/common.sh@335 -- # IFS=.-: 00:09:51.347 11:53:56 -- scripts/common.sh@335 -- # read -ra ver1 00:09:51.347 11:53:56 -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.347 11:53:56 -- scripts/common.sh@336 -- # read -ra ver2 00:09:51.347 11:53:56 -- scripts/common.sh@337 -- # local 'op=<' 00:09:51.347 11:53:56 -- scripts/common.sh@339 -- # ver1_l=2 00:09:51.347 11:53:56 -- scripts/common.sh@340 -- # ver2_l=1 00:09:51.347 11:53:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:51.347 11:53:56 -- scripts/common.sh@343 -- # case "$op" in 00:09:51.347 11:53:56 -- scripts/common.sh@344 -- # : 1 00:09:51.347 11:53:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:51.347 11:53:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.347 11:53:56 -- scripts/common.sh@364 -- # decimal 1 00:09:51.347 11:53:56 -- scripts/common.sh@352 -- # local d=1 00:09:51.347 11:53:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.347 11:53:56 -- scripts/common.sh@354 -- # echo 1 00:09:51.347 11:53:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:51.347 11:53:56 -- scripts/common.sh@365 -- # decimal 2 00:09:51.347 11:53:56 -- scripts/common.sh@352 -- # local d=2 00:09:51.347 11:53:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.347 11:53:56 -- scripts/common.sh@354 -- # echo 2 00:09:51.347 11:53:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:51.347 11:53:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:51.347 11:53:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:51.347 11:53:56 -- scripts/common.sh@367 -- # return 0 00:09:51.347 11:53:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:51.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.347 --rc genhtml_branch_coverage=1 00:09:51.347 --rc genhtml_function_coverage=1 00:09:51.347 --rc genhtml_legend=1 00:09:51.347 --rc geninfo_all_blocks=1 00:09:51.347 --rc geninfo_unexecuted_blocks=1 00:09:51.347 00:09:51.347 ' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:51.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.347 --rc genhtml_branch_coverage=1 00:09:51.347 --rc genhtml_function_coverage=1 00:09:51.347 --rc genhtml_legend=1 00:09:51.347 --rc geninfo_all_blocks=1 00:09:51.347 --rc geninfo_unexecuted_blocks=1 00:09:51.347 00:09:51.347 ' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:51.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.347 --rc genhtml_branch_coverage=1 00:09:51.347 --rc genhtml_function_coverage=1 00:09:51.347 --rc genhtml_legend=1 00:09:51.347 --rc geninfo_all_blocks=1 00:09:51.347 --rc geninfo_unexecuted_blocks=1 00:09:51.347 00:09:51.347 ' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:51.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.347 --rc genhtml_branch_coverage=1 00:09:51.347 --rc genhtml_function_coverage=1 00:09:51.347 --rc genhtml_legend=1 00:09:51.347 --rc geninfo_all_blocks=1 00:09:51.347 --rc geninfo_unexecuted_blocks=1 00:09:51.347 00:09:51.347 ' 00:09:51.347 11:53:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:51.347 11:53:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:51.347 11:53:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:51.347 11:53:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:51.347 11:53:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.347 11:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.347 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:09:51.347 ************************************ 00:09:51.347 START TEST default_locks 00:09:51.347 ************************************ 00:09:51.347 11:53:56 -- common/autotest_common.sh@1114 -- # default_locks 00:09:51.347 11:53:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67225 00:09:51.347 11:53:56 -- event/cpu_locks.sh@47 -- # waitforlisten 67225 00:09:51.347 11:53:56 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:09:51.347 11:53:56 -- common/autotest_common.sh@829 -- # '[' -z 67225 ']' 00:09:51.347 11:53:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.348 11:53:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.348 11:53:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.348 11:53:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.348 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:09:51.348 [2024-11-29 11:53:56.805333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:51.348 [2024-11-29 11:53:56.805452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67225 ] 00:09:51.605 [2024-11-29 11:53:56.946791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.605 [2024-11-29 11:53:57.028919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.605 [2024-11-29 11:53:57.029130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.540 11:53:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.540 11:53:57 -- common/autotest_common.sh@862 -- # return 0 00:09:52.540 11:53:57 -- event/cpu_locks.sh@49 -- # locks_exist 67225 00:09:52.540 11:53:57 -- event/cpu_locks.sh@22 -- # lslocks -p 67225 00:09:52.540 11:53:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:52.800 11:53:58 -- event/cpu_locks.sh@50 -- # killprocess 67225 00:09:52.800 11:53:58 -- common/autotest_common.sh@936 -- # '[' -z 67225 ']' 00:09:52.800 11:53:58 -- common/autotest_common.sh@940 -- # kill -0 67225 00:09:52.800 11:53:58 -- common/autotest_common.sh@941 -- # uname 00:09:52.800 11:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.800 11:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67225 00:09:52.800 11:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.800 killing process with pid 67225 00:09:52.800 11:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.800 11:53:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67225' 00:09:52.800 11:53:58 -- common/autotest_common.sh@955 -- # kill 67225 00:09:52.800 11:53:58 -- common/autotest_common.sh@960 -- # wait 67225 00:09:53.442 11:53:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67225 00:09:53.442 11:53:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:53.442 11:53:58 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67225 00:09:53.442 11:53:58 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:53.442 11:53:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.443 11:53:58 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:53.443 11:53:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.443 11:53:58 -- common/autotest_common.sh@653 -- # waitforlisten 67225 00:09:53.443 11:53:58 -- common/autotest_common.sh@829 -- # '[' -z 67225 ']' 00:09:53.443 11:53:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.443 11:53:58 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.443 11:53:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.443 11:53:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.443 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67225) - No such process 00:09:53.443 ERROR: process (pid: 67225) is no longer running 00:09:53.443 11:53:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.443 11:53:58 -- common/autotest_common.sh@862 -- # return 1 00:09:53.443 11:53:58 -- common/autotest_common.sh@653 -- # es=1 00:09:53.443 11:53:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.443 11:53:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.443 11:53:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.443 11:53:58 -- event/cpu_locks.sh@54 -- # no_locks 00:09:53.443 11:53:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:53.443 11:53:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:53.443 11:53:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:53.443 00:09:53.443 real 0m2.061s 00:09:53.443 user 0m2.204s 00:09:53.443 sys 0m0.567s 00:09:53.443 11:53:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.443 ************************************ 00:09:53.443 END TEST default_locks 00:09:53.443 ************************************ 00:09:53.443 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 11:53:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:53.443 11:53:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.443 11:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.443 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 ************************************ 00:09:53.443 START TEST default_locks_via_rpc 00:09:53.443 ************************************ 00:09:53.443 11:53:58 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:09:53.443 11:53:58 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67277 00:09:53.443 11:53:58 -- event/cpu_locks.sh@63 -- # waitforlisten 67277 00:09:53.443 11:53:58 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:53.443 11:53:58 -- common/autotest_common.sh@829 -- # '[' -z 67277 ']' 00:09:53.443 11:53:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.443 11:53:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.443 11:53:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.443 11:53:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.443 11:53:58 -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 [2024-11-29 11:53:58.922213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
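The locks_exist check exercised in TEST default_locks above reduces to asking lslocks whether the target process holds a file lock whose name contains spdk_cpu_lock; killprocess then confirms the lock disappears with the process. A minimal, hedged restatement of that check (helper name reused from the trace, error handling simplified) might be:

    # Does PID $1 hold an SPDK CPU-core lock? Mirrors "lslocks -p <pid> | grep -q spdk_cpu_lock".
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Usage against a target started with -m 0x1:
    # locks_exist "$spdk_tgt_pid" && echo "core lock is held"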
00:09:53.443 [2024-11-29 11:53:58.922339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67277 ] 00:09:53.701 [2024-11-29 11:53:59.057085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.701 [2024-11-29 11:53:59.183629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.701 [2024-11-29 11:53:59.183811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.636 11:53:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.636 11:53:59 -- common/autotest_common.sh@862 -- # return 0 00:09:54.636 11:53:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:54.636 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.636 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:09:54.636 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.636 11:53:59 -- event/cpu_locks.sh@67 -- # no_locks 00:09:54.636 11:53:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:54.636 11:53:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:54.636 11:53:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:54.636 11:53:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:54.636 11:53:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.636 11:53:59 -- common/autotest_common.sh@10 -- # set +x 00:09:54.636 11:53:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.636 11:53:59 -- event/cpu_locks.sh@71 -- # locks_exist 67277 00:09:54.636 11:53:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:54.636 11:53:59 -- event/cpu_locks.sh@22 -- # lslocks -p 67277 00:09:54.895 11:54:00 -- event/cpu_locks.sh@73 -- # killprocess 67277 00:09:54.895 11:54:00 -- common/autotest_common.sh@936 -- # '[' -z 67277 ']' 00:09:54.895 11:54:00 -- common/autotest_common.sh@940 -- # kill -0 67277 00:09:54.895 11:54:00 -- common/autotest_common.sh@941 -- # uname 00:09:54.895 11:54:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:54.895 11:54:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67277 00:09:54.895 killing process with pid 67277 00:09:54.895 11:54:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:54.895 11:54:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:54.895 11:54:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67277' 00:09:54.895 11:54:00 -- common/autotest_common.sh@955 -- # kill 67277 00:09:54.895 11:54:00 -- common/autotest_common.sh@960 -- # wait 67277 00:09:55.463 00:09:55.463 real 0m2.065s 00:09:55.463 user 0m2.128s 00:09:55.463 sys 0m0.674s 00:09:55.463 11:54:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.463 ************************************ 00:09:55.463 END TEST default_locks_via_rpc 00:09:55.463 ************************************ 00:09:55.463 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:55.722 11:54:00 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:55.722 11:54:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.722 11:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.722 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:55.722 
************************************ 00:09:55.722 START TEST non_locking_app_on_locked_coremask 00:09:55.722 ************************************ 00:09:55.722 11:54:00 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:09:55.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.722 11:54:00 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67328 00:09:55.722 11:54:00 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:55.722 11:54:00 -- event/cpu_locks.sh@81 -- # waitforlisten 67328 /var/tmp/spdk.sock 00:09:55.722 11:54:00 -- common/autotest_common.sh@829 -- # '[' -z 67328 ']' 00:09:55.722 11:54:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.722 11:54:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.722 11:54:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.722 11:54:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.722 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:09:55.722 [2024-11-29 11:54:01.047545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:55.722 [2024-11-29 11:54:01.047988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67328 ] 00:09:55.722 [2024-11-29 11:54:01.186683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.981 [2024-11-29 11:54:01.321493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:55.981 [2024-11-29 11:54:01.322047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.550 11:54:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.550 11:54:02 -- common/autotest_common.sh@862 -- # return 0 00:09:56.550 11:54:02 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67344 00:09:56.550 11:54:02 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:56.550 11:54:02 -- event/cpu_locks.sh@85 -- # waitforlisten 67344 /var/tmp/spdk2.sock 00:09:56.550 11:54:02 -- common/autotest_common.sh@829 -- # '[' -z 67344 ']' 00:09:56.550 11:54:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.550 11:54:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.550 11:54:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.550 11:54:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.550 11:54:02 -- common/autotest_common.sh@10 -- # set +x 00:09:56.807 [2024-11-29 11:54:02.088125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:56.807 [2024-11-29 11:54:02.088537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67344 ] 00:09:56.807 [2024-11-29 11:54:02.231105] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
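The second target above starts cleanly only because it is launched with --disable-cpumask-locks, so it never competes for the core-0 lock already held by the first instance, and because it listens on a separate RPC socket. A hedged sketch of that two-instance setup (binary path, core mask and socket name copied from the trace; backgrounding and cleanup simplified) is:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance: claims the file lock for core 0 and serves /var/tmp/spdk.sock.
    "$SPDK_TGT" -m 0x1 &
    pid1=$!

    # Second instance: same core mask, but it skips the core lock and uses its own socket.
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!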
00:09:56.807 [2024-11-29 11:54:02.231175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.066 [2024-11-29 11:54:02.486148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.066 [2024-11-29 11:54:02.486359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.445 11:54:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.445 11:54:03 -- common/autotest_common.sh@862 -- # return 0 00:09:58.445 11:54:03 -- event/cpu_locks.sh@87 -- # locks_exist 67328 00:09:58.445 11:54:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:58.445 11:54:03 -- event/cpu_locks.sh@22 -- # lslocks -p 67328 00:09:59.419 11:54:04 -- event/cpu_locks.sh@89 -- # killprocess 67328 00:09:59.419 11:54:04 -- common/autotest_common.sh@936 -- # '[' -z 67328 ']' 00:09:59.419 11:54:04 -- common/autotest_common.sh@940 -- # kill -0 67328 00:09:59.419 11:54:04 -- common/autotest_common.sh@941 -- # uname 00:09:59.419 11:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:59.419 11:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67328 00:09:59.419 killing process with pid 67328 00:09:59.419 11:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:59.419 11:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:59.419 11:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67328' 00:09:59.419 11:54:04 -- common/autotest_common.sh@955 -- # kill 67328 00:09:59.419 11:54:04 -- common/autotest_common.sh@960 -- # wait 67328 00:10:00.356 11:54:05 -- event/cpu_locks.sh@90 -- # killprocess 67344 00:10:00.356 11:54:05 -- common/autotest_common.sh@936 -- # '[' -z 67344 ']' 00:10:00.356 11:54:05 -- common/autotest_common.sh@940 -- # kill -0 67344 00:10:00.356 11:54:05 -- common/autotest_common.sh@941 -- # uname 00:10:00.356 11:54:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:00.356 11:54:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67344 00:10:00.356 killing process with pid 67344 00:10:00.356 11:54:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:00.356 11:54:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:00.356 11:54:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67344' 00:10:00.356 11:54:05 -- common/autotest_common.sh@955 -- # kill 67344 00:10:00.356 11:54:05 -- common/autotest_common.sh@960 -- # wait 67344 00:10:00.926 ************************************ 00:10:00.926 END TEST non_locking_app_on_locked_coremask 00:10:00.926 ************************************ 00:10:00.926 00:10:00.926 real 0m5.322s 00:10:00.926 user 0m5.803s 00:10:00.926 sys 0m1.338s 00:10:00.926 11:54:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:00.926 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:10:00.926 11:54:06 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:00.926 11:54:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:00.926 11:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.926 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:10:00.926 ************************************ 00:10:00.926 START TEST locking_app_on_unlocked_coremask 00:10:00.926 ************************************ 00:10:00.926 11:54:06 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:10:00.926 11:54:06 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67424 00:10:00.926 11:54:06 -- event/cpu_locks.sh@99 -- # waitforlisten 67424 /var/tmp/spdk.sock 00:10:00.926 11:54:06 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:00.926 11:54:06 -- common/autotest_common.sh@829 -- # '[' -z 67424 ']' 00:10:00.926 11:54:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.926 11:54:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.926 11:54:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.926 11:54:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.926 11:54:06 -- common/autotest_common.sh@10 -- # set +x 00:10:00.926 [2024-11-29 11:54:06.428909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:00.926 [2024-11-29 11:54:06.429019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67424 ] 00:10:01.186 [2024-11-29 11:54:06.568781] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:01.186 [2024-11-29 11:54:06.568832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.186 [2024-11-29 11:54:06.668951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:01.186 [2024-11-29 11:54:06.669184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.124 11:54:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.124 11:54:07 -- common/autotest_common.sh@862 -- # return 0 00:10:02.124 11:54:07 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67445 00:10:02.124 11:54:07 -- event/cpu_locks.sh@103 -- # waitforlisten 67445 /var/tmp/spdk2.sock 00:10:02.124 11:54:07 -- common/autotest_common.sh@829 -- # '[' -z 67445 ']' 00:10:02.124 11:54:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.124 11:54:07 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:02.124 11:54:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.124 11:54:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.124 11:54:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.124 11:54:07 -- common/autotest_common.sh@10 -- # set +x 00:10:02.124 [2024-11-29 11:54:07.543421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:02.124 [2024-11-29 11:54:07.543608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67445 ] 00:10:02.383 [2024-11-29 11:54:07.684219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.641 [2024-11-29 11:54:07.894713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.641 [2024-11-29 11:54:07.894883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.208 11:54:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.208 11:54:08 -- common/autotest_common.sh@862 -- # return 0 00:10:03.208 11:54:08 -- event/cpu_locks.sh@105 -- # locks_exist 67445 00:10:03.208 11:54:08 -- event/cpu_locks.sh@22 -- # lslocks -p 67445 00:10:03.208 11:54:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.146 11:54:09 -- event/cpu_locks.sh@107 -- # killprocess 67424 00:10:04.146 11:54:09 -- common/autotest_common.sh@936 -- # '[' -z 67424 ']' 00:10:04.146 11:54:09 -- common/autotest_common.sh@940 -- # kill -0 67424 00:10:04.146 11:54:09 -- common/autotest_common.sh@941 -- # uname 00:10:04.146 11:54:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.146 11:54:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67424 00:10:04.146 killing process with pid 67424 00:10:04.146 11:54:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:04.146 11:54:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:04.146 11:54:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67424' 00:10:04.146 11:54:09 -- common/autotest_common.sh@955 -- # kill 67424 00:10:04.146 11:54:09 -- common/autotest_common.sh@960 -- # wait 67424 00:10:04.714 11:54:10 -- event/cpu_locks.sh@108 -- # killprocess 67445 00:10:04.714 11:54:10 -- common/autotest_common.sh@936 -- # '[' -z 67445 ']' 00:10:04.714 11:54:10 -- common/autotest_common.sh@940 -- # kill -0 67445 00:10:04.714 11:54:10 -- common/autotest_common.sh@941 -- # uname 00:10:04.714 11:54:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.714 11:54:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67445 00:10:04.974 11:54:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:04.974 11:54:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:04.974 killing process with pid 67445 00:10:04.974 11:54:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67445' 00:10:04.974 11:54:10 -- common/autotest_common.sh@955 -- # kill 67445 00:10:04.974 11:54:10 -- common/autotest_common.sh@960 -- # wait 67445 00:10:05.233 00:10:05.233 real 0m4.271s 00:10:05.233 user 0m4.794s 00:10:05.233 sys 0m1.189s 00:10:05.233 11:54:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:05.233 ************************************ 00:10:05.233 END TEST locking_app_on_unlocked_coremask 00:10:05.233 ************************************ 00:10:05.233 11:54:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.233 11:54:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:05.233 11:54:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:05.233 11:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.233 11:54:10 -- common/autotest_common.sh@10 -- # set +x 
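The locks_exist checks in the runs above pair lslocks with grep: every reactor that claims a core keeps a lock open on one of the /var/tmp/spdk_cpu_lock_* files, so listing the target's locks is enough to prove the claim is in place. A minimal sketch of that idea, with the pid taken from the run above (this is not the exact helper from cpu_locks.sh):

    pid=67445
    # lslocks prints every file lock held by the process; the per-core lock files
    # all carry the spdk_cpu_lock prefix, so a quiet grep doubles as the assertion.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds at least one CPU core lock"
    fi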
00:10:05.233 ************************************ 00:10:05.233 START TEST locking_app_on_locked_coremask 00:10:05.233 ************************************ 00:10:05.233 11:54:10 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:10:05.233 11:54:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67520 00:10:05.233 11:54:10 -- event/cpu_locks.sh@116 -- # waitforlisten 67520 /var/tmp/spdk.sock 00:10:05.233 11:54:10 -- common/autotest_common.sh@829 -- # '[' -z 67520 ']' 00:10:05.233 11:54:10 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.233 11:54:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.233 11:54:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.233 11:54:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.233 11:54:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.233 11:54:10 -- common/autotest_common.sh@10 -- # set +x 00:10:05.493 [2024-11-29 11:54:10.747713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:05.493 [2024-11-29 11:54:10.747849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67520 ] 00:10:05.493 [2024-11-29 11:54:10.884504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.493 [2024-11-29 11:54:10.975277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:05.493 [2024-11-29 11:54:10.975737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.429 11:54:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.429 11:54:11 -- common/autotest_common.sh@862 -- # return 0 00:10:06.429 11:54:11 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67536 00:10:06.429 11:54:11 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:06.429 11:54:11 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67536 /var/tmp/spdk2.sock 00:10:06.429 11:54:11 -- common/autotest_common.sh@650 -- # local es=0 00:10:06.430 11:54:11 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67536 /var/tmp/spdk2.sock 00:10:06.430 11:54:11 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:06.430 11:54:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.430 11:54:11 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:06.430 11:54:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.430 11:54:11 -- common/autotest_common.sh@653 -- # waitforlisten 67536 /var/tmp/spdk2.sock 00:10:06.430 11:54:11 -- common/autotest_common.sh@829 -- # '[' -z 67536 ']' 00:10:06.430 11:54:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.430 11:54:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.430 11:54:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:06.430 11:54:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.430 11:54:11 -- common/autotest_common.sh@10 -- # set +x 00:10:06.430 [2024-11-29 11:54:11.863857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:06.430 [2024-11-29 11:54:11.864351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67536 ] 00:10:06.688 [2024-11-29 11:54:12.005066] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67520 has claimed it. 00:10:06.688 [2024-11-29 11:54:12.005150] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:07.256 ERROR: process (pid: 67536) is no longer running 00:10:07.256 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67536) - No such process 00:10:07.256 11:54:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.256 11:54:12 -- common/autotest_common.sh@862 -- # return 1 00:10:07.256 11:54:12 -- common/autotest_common.sh@653 -- # es=1 00:10:07.256 11:54:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:07.256 11:54:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:07.256 11:54:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:07.256 11:54:12 -- event/cpu_locks.sh@122 -- # locks_exist 67520 00:10:07.256 11:54:12 -- event/cpu_locks.sh@22 -- # lslocks -p 67520 00:10:07.256 11:54:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:07.514 11:54:12 -- event/cpu_locks.sh@124 -- # killprocess 67520 00:10:07.514 11:54:12 -- common/autotest_common.sh@936 -- # '[' -z 67520 ']' 00:10:07.514 11:54:12 -- common/autotest_common.sh@940 -- # kill -0 67520 00:10:07.514 11:54:12 -- common/autotest_common.sh@941 -- # uname 00:10:07.514 11:54:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.514 11:54:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67520 00:10:07.514 killing process with pid 67520 00:10:07.514 11:54:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.514 11:54:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.514 11:54:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67520' 00:10:07.514 11:54:13 -- common/autotest_common.sh@955 -- # kill 67520 00:10:07.514 11:54:13 -- common/autotest_common.sh@960 -- # wait 67520 00:10:08.082 ************************************ 00:10:08.082 END TEST locking_app_on_locked_coremask 00:10:08.082 ************************************ 00:10:08.082 00:10:08.082 real 0m2.698s 00:10:08.082 user 0m3.143s 00:10:08.082 sys 0m0.647s 00:10:08.083 11:54:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.083 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:10:08.083 11:54:13 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:08.083 11:54:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.083 11:54:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.083 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:10:08.083 ************************************ 00:10:08.083 START TEST locking_overlapped_coremask 00:10:08.083 ************************************ 00:10:08.083 11:54:13 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:10:08.083 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.083 11:54:13 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67576 00:10:08.083 11:54:13 -- event/cpu_locks.sh@133 -- # waitforlisten 67576 /var/tmp/spdk.sock 00:10:08.083 11:54:13 -- common/autotest_common.sh@829 -- # '[' -z 67576 ']' 00:10:08.083 11:54:13 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:08.083 11:54:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.083 11:54:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.083 11:54:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.083 11:54:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.083 11:54:13 -- common/autotest_common.sh@10 -- # set +x 00:10:08.083 [2024-11-29 11:54:13.512550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:08.083 [2024-11-29 11:54:13.513077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67576 ] 00:10:08.342 [2024-11-29 11:54:13.651663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.342 [2024-11-29 11:54:13.750649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:08.342 [2024-11-29 11:54:13.751244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.342 [2024-11-29 11:54:13.751377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.342 [2024-11-29 11:54:13.751387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.279 11:54:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.279 11:54:14 -- common/autotest_common.sh@862 -- # return 0 00:10:09.279 11:54:14 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:09.279 11:54:14 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67594 00:10:09.279 11:54:14 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67594 /var/tmp/spdk2.sock 00:10:09.279 11:54:14 -- common/autotest_common.sh@650 -- # local es=0 00:10:09.279 11:54:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67594 /var/tmp/spdk2.sock 00:10:09.279 11:54:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:09.279 11:54:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.279 11:54:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:09.279 11:54:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.279 11:54:14 -- common/autotest_common.sh@653 -- # waitforlisten 67594 /var/tmp/spdk2.sock 00:10:09.279 11:54:14 -- common/autotest_common.sh@829 -- # '[' -z 67594 ']' 00:10:09.279 11:54:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.279 11:54:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.279 11:54:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:09.279 11:54:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.279 11:54:14 -- common/autotest_common.sh@10 -- # set +x 00:10:09.279 [2024-11-29 11:54:14.570797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:09.279 [2024-11-29 11:54:14.571371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67594 ] 00:10:09.279 [2024-11-29 11:54:14.714223] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67576 has claimed it. 00:10:09.279 [2024-11-29 11:54:14.714313] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:09.846 ERROR: process (pid: 67594) is no longer running 00:10:09.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67594) - No such process 00:10:09.846 11:54:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.846 11:54:15 -- common/autotest_common.sh@862 -- # return 1 00:10:09.846 11:54:15 -- common/autotest_common.sh@653 -- # es=1 00:10:09.846 11:54:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:09.846 11:54:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:09.846 11:54:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:09.846 11:54:15 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:09.846 11:54:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:09.846 11:54:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:09.846 11:54:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:09.846 11:54:15 -- event/cpu_locks.sh@141 -- # killprocess 67576 00:10:09.846 11:54:15 -- common/autotest_common.sh@936 -- # '[' -z 67576 ']' 00:10:09.846 11:54:15 -- common/autotest_common.sh@940 -- # kill -0 67576 00:10:09.846 11:54:15 -- common/autotest_common.sh@941 -- # uname 00:10:09.846 11:54:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.846 11:54:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67576 00:10:10.104 11:54:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:10.104 11:54:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:10.104 11:54:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67576' 00:10:10.104 killing process with pid 67576 00:10:10.104 11:54:15 -- common/autotest_common.sh@955 -- # kill 67576 00:10:10.104 11:54:15 -- common/autotest_common.sh@960 -- # wait 67576 00:10:10.363 00:10:10.363 real 0m2.306s 00:10:10.363 user 0m6.479s 00:10:10.363 sys 0m0.456s 00:10:10.363 ************************************ 00:10:10.363 END TEST locking_overlapped_coremask 00:10:10.363 ************************************ 00:10:10.363 11:54:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:10.363 11:54:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.363 11:54:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:10.363 11:54:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:10.363 11:54:15 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.363 11:54:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.363 ************************************ 00:10:10.363 START TEST locking_overlapped_coremask_via_rpc 00:10:10.363 ************************************ 00:10:10.363 11:54:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:10:10.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.363 11:54:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67640 00:10:10.363 11:54:15 -- event/cpu_locks.sh@149 -- # waitforlisten 67640 /var/tmp/spdk.sock 00:10:10.363 11:54:15 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:10.363 11:54:15 -- common/autotest_common.sh@829 -- # '[' -z 67640 ']' 00:10:10.363 11:54:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.363 11:54:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.363 11:54:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.363 11:54:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.363 11:54:15 -- common/autotest_common.sh@10 -- # set +x 00:10:10.622 [2024-11-29 11:54:15.881570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:10.622 [2024-11-29 11:54:15.882770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67640 ] 00:10:10.622 [2024-11-29 11:54:16.021665] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:10.622 [2024-11-29 11:54:16.022027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.881 [2024-11-29 11:54:16.132869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.881 [2024-11-29 11:54:16.133571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.881 [2024-11-29 11:54:16.133723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.881 [2024-11-29 11:54:16.133737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.449 11:54:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.449 11:54:16 -- common/autotest_common.sh@862 -- # return 0 00:10:11.449 11:54:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67662 00:10:11.449 11:54:16 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:11.449 11:54:16 -- event/cpu_locks.sh@153 -- # waitforlisten 67662 /var/tmp/spdk2.sock 00:10:11.449 11:54:16 -- common/autotest_common.sh@829 -- # '[' -z 67662 ']' 00:10:11.449 11:54:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.449 11:54:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.449 11:54:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:11.449 11:54:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.449 11:54:16 -- common/autotest_common.sh@10 -- # set +x 00:10:11.449 [2024-11-29 11:54:16.954599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:11.449 [2024-11-29 11:54:16.955759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67662 ] 00:10:11.737 [2024-11-29 11:54:17.099566] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:11.737 [2024-11-29 11:54:17.099641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.995 [2024-11-29 11:54:17.312232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.995 [2024-11-29 11:54:17.313717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.995 [2024-11-29 11:54:17.316653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:11.995 [2024-11-29 11:54:17.316661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.594 11:54:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.594 11:54:17 -- common/autotest_common.sh@862 -- # return 0 00:10:12.594 11:54:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:12.594 11:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.594 11:54:17 -- common/autotest_common.sh@10 -- # set +x 00:10:12.594 11:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.594 11:54:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:12.594 11:54:17 -- common/autotest_common.sh@650 -- # local es=0 00:10:12.594 11:54:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:12.594 11:54:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:12.594 11:54:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.594 11:54:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:12.594 11:54:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.594 11:54:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:12.594 11:54:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.594 11:54:18 -- common/autotest_common.sh@10 -- # set +x 00:10:12.594 [2024-11-29 11:54:18.009701] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67640 has claimed it. 00:10:12.594 request: 00:10:12.594 { 00:10:12.595 "method": "framework_enable_cpumask_locks", 00:10:12.595 "req_id": 1 00:10:12.595 } 00:10:12.595 Got JSON-RPC error response 00:10:12.595 response: 00:10:12.595 { 00:10:12.595 "code": -32603, 00:10:12.595 "message": "Failed to claim CPU core: 2" 00:10:12.595 } 00:10:12.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
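The failure above is the point of the test: the first target (pid 67640, cpumask 0x7, cores 0-2) enables the CPU core locks over RPC first, so when the second target (cpumask 0x1c, cores 2-4) is asked to do the same it cannot claim the shared core 2 and the RPC fails with -32603. A hedged sketch of driving the same pair of calls by hand, using the socket paths from the run above (the rpc.py location is assumed, not taken from this log):

    # First target claims its cores; this succeeds.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # Second target overlaps on core 2, so this is expected to fail with
    # "Failed to claim CPU core: 2" while the first process is still alive.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "expected: core 2 already locked by pid 67640"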
00:10:12.595 11:54:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:12.595 11:54:18 -- common/autotest_common.sh@653 -- # es=1 00:10:12.595 11:54:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.595 11:54:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.595 11:54:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.595 11:54:18 -- event/cpu_locks.sh@158 -- # waitforlisten 67640 /var/tmp/spdk.sock 00:10:12.595 11:54:18 -- common/autotest_common.sh@829 -- # '[' -z 67640 ']' 00:10:12.595 11:54:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.595 11:54:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.595 11:54:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.595 11:54:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.595 11:54:18 -- common/autotest_common.sh@10 -- # set +x 00:10:12.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:12.853 11:54:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.853 11:54:18 -- common/autotest_common.sh@862 -- # return 0 00:10:12.853 11:54:18 -- event/cpu_locks.sh@159 -- # waitforlisten 67662 /var/tmp/spdk2.sock 00:10:12.853 11:54:18 -- common/autotest_common.sh@829 -- # '[' -z 67662 ']' 00:10:12.853 11:54:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:12.853 11:54:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.853 11:54:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:12.853 11:54:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.853 11:54:18 -- common/autotest_common.sh@10 -- # set +x 00:10:13.111 ************************************ 00:10:13.111 END TEST locking_overlapped_coremask_via_rpc 00:10:13.111 ************************************ 00:10:13.111 11:54:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.111 11:54:18 -- common/autotest_common.sh@862 -- # return 0 00:10:13.111 11:54:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:13.111 11:54:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:13.111 11:54:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:13.112 11:54:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:13.112 00:10:13.112 real 0m2.714s 00:10:13.112 user 0m1.419s 00:10:13.112 sys 0m0.217s 00:10:13.112 11:54:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:13.112 11:54:18 -- common/autotest_common.sh@10 -- # set +x 00:10:13.112 11:54:18 -- event/cpu_locks.sh@174 -- # cleanup 00:10:13.112 11:54:18 -- event/cpu_locks.sh@15 -- # [[ -z 67640 ]] 00:10:13.112 11:54:18 -- event/cpu_locks.sh@15 -- # killprocess 67640 00:10:13.112 11:54:18 -- common/autotest_common.sh@936 -- # '[' -z 67640 ']' 00:10:13.112 11:54:18 -- common/autotest_common.sh@940 -- # kill -0 67640 00:10:13.112 11:54:18 -- common/autotest_common.sh@941 -- # uname 00:10:13.112 11:54:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.112 11:54:18 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 67640 00:10:13.112 killing process with pid 67640 00:10:13.112 11:54:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:13.112 11:54:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:13.112 11:54:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67640' 00:10:13.112 11:54:18 -- common/autotest_common.sh@955 -- # kill 67640 00:10:13.112 11:54:18 -- common/autotest_common.sh@960 -- # wait 67640 00:10:13.679 11:54:19 -- event/cpu_locks.sh@16 -- # [[ -z 67662 ]] 00:10:13.679 11:54:19 -- event/cpu_locks.sh@16 -- # killprocess 67662 00:10:13.679 11:54:19 -- common/autotest_common.sh@936 -- # '[' -z 67662 ']' 00:10:13.679 11:54:19 -- common/autotest_common.sh@940 -- # kill -0 67662 00:10:13.679 11:54:19 -- common/autotest_common.sh@941 -- # uname 00:10:13.679 11:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.679 11:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67662 00:10:13.938 killing process with pid 67662 00:10:13.938 11:54:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:13.938 11:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:13.938 11:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67662' 00:10:13.938 11:54:19 -- common/autotest_common.sh@955 -- # kill 67662 00:10:13.938 11:54:19 -- common/autotest_common.sh@960 -- # wait 67662 00:10:14.505 11:54:19 -- event/cpu_locks.sh@18 -- # rm -f 00:10:14.505 11:54:19 -- event/cpu_locks.sh@1 -- # cleanup 00:10:14.505 11:54:19 -- event/cpu_locks.sh@15 -- # [[ -z 67640 ]] 00:10:14.505 11:54:19 -- event/cpu_locks.sh@15 -- # killprocess 67640 00:10:14.505 11:54:19 -- common/autotest_common.sh@936 -- # '[' -z 67640 ']' 00:10:14.505 11:54:19 -- common/autotest_common.sh@940 -- # kill -0 67640 00:10:14.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67640) - No such process 00:10:14.505 Process with pid 67640 is not found 00:10:14.505 11:54:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67640 is not found' 00:10:14.505 11:54:19 -- event/cpu_locks.sh@16 -- # [[ -z 67662 ]] 00:10:14.505 11:54:19 -- event/cpu_locks.sh@16 -- # killprocess 67662 00:10:14.505 11:54:19 -- common/autotest_common.sh@936 -- # '[' -z 67662 ']' 00:10:14.505 11:54:19 -- common/autotest_common.sh@940 -- # kill -0 67662 00:10:14.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67662) - No such process 00:10:14.505 Process with pid 67662 is not found 00:10:14.505 11:54:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67662 is not found' 00:10:14.505 11:54:19 -- event/cpu_locks.sh@18 -- # rm -f 00:10:14.505 00:10:14.505 real 0m23.211s 00:10:14.505 user 0m39.840s 00:10:14.505 sys 0m6.161s 00:10:14.505 11:54:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.505 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:14.505 ************************************ 00:10:14.505 END TEST cpu_locks 00:10:14.505 ************************************ 00:10:14.505 ************************************ 00:10:14.505 END TEST event 00:10:14.505 ************************************ 00:10:14.505 00:10:14.505 real 0m52.363s 00:10:14.505 user 1m39.633s 00:10:14.505 sys 0m10.455s 00:10:14.505 11:54:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.505 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:14.505 11:54:19 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:14.505 11:54:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.505 11:54:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.505 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:14.505 ************************************ 00:10:14.505 START TEST thread 00:10:14.505 ************************************ 00:10:14.505 11:54:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:14.505 * Looking for test storage... 00:10:14.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:14.505 11:54:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:14.505 11:54:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:14.505 11:54:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:14.765 11:54:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:14.765 11:54:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:14.765 11:54:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:14.765 11:54:20 -- scripts/common.sh@335 -- # IFS=.-: 00:10:14.765 11:54:20 -- scripts/common.sh@335 -- # read -ra ver1 00:10:14.765 11:54:20 -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.765 11:54:20 -- scripts/common.sh@336 -- # read -ra ver2 00:10:14.765 11:54:20 -- scripts/common.sh@337 -- # local 'op=<' 00:10:14.765 11:54:20 -- scripts/common.sh@339 -- # ver1_l=2 00:10:14.765 11:54:20 -- scripts/common.sh@340 -- # ver2_l=1 00:10:14.765 11:54:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:14.765 11:54:20 -- scripts/common.sh@343 -- # case "$op" in 00:10:14.765 11:54:20 -- scripts/common.sh@344 -- # : 1 00:10:14.765 11:54:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:14.765 11:54:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.765 11:54:20 -- scripts/common.sh@364 -- # decimal 1 00:10:14.765 11:54:20 -- scripts/common.sh@352 -- # local d=1 00:10:14.765 11:54:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.765 11:54:20 -- scripts/common.sh@354 -- # echo 1 00:10:14.765 11:54:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:14.765 11:54:20 -- scripts/common.sh@365 -- # decimal 2 00:10:14.765 11:54:20 -- scripts/common.sh@352 -- # local d=2 00:10:14.765 11:54:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.765 11:54:20 -- scripts/common.sh@354 -- # echo 2 00:10:14.765 11:54:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:14.765 11:54:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:14.765 11:54:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:14.765 11:54:20 -- scripts/common.sh@367 -- # return 0 00:10:14.765 11:54:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.765 --rc genhtml_branch_coverage=1 00:10:14.765 --rc genhtml_function_coverage=1 00:10:14.765 --rc genhtml_legend=1 00:10:14.765 --rc geninfo_all_blocks=1 00:10:14.765 --rc geninfo_unexecuted_blocks=1 00:10:14.765 00:10:14.765 ' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.765 --rc genhtml_branch_coverage=1 00:10:14.765 --rc genhtml_function_coverage=1 00:10:14.765 --rc genhtml_legend=1 00:10:14.765 --rc geninfo_all_blocks=1 00:10:14.765 --rc geninfo_unexecuted_blocks=1 00:10:14.765 00:10:14.765 ' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.765 --rc genhtml_branch_coverage=1 00:10:14.765 --rc genhtml_function_coverage=1 00:10:14.765 --rc genhtml_legend=1 00:10:14.765 --rc geninfo_all_blocks=1 00:10:14.765 --rc geninfo_unexecuted_blocks=1 00:10:14.765 00:10:14.765 ' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.765 --rc genhtml_branch_coverage=1 00:10:14.765 --rc genhtml_function_coverage=1 00:10:14.765 --rc genhtml_legend=1 00:10:14.765 --rc geninfo_all_blocks=1 00:10:14.765 --rc geninfo_unexecuted_blocks=1 00:10:14.765 00:10:14.765 ' 00:10:14.765 11:54:20 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:14.765 11:54:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:14.765 11:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.765 11:54:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.765 ************************************ 00:10:14.765 START TEST thread_poller_perf 00:10:14.765 ************************************ 00:10:14.765 11:54:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:14.765 [2024-11-29 11:54:20.071975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:14.765 [2024-11-29 11:54:20.072868] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67798 ] 00:10:14.765 [2024-11-29 11:54:20.208663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.025 [2024-11-29 11:54:20.337108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.025 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:15.960 [2024-11-29T11:54:21.471Z] ====================================== 00:10:15.960 [2024-11-29T11:54:21.471Z] busy:2214476334 (cyc) 00:10:15.960 [2024-11-29T11:54:21.471Z] total_run_count: 319000 00:10:15.960 [2024-11-29T11:54:21.471Z] tsc_hz: 2200000000 (cyc) 00:10:15.960 [2024-11-29T11:54:21.471Z] ====================================== 00:10:15.960 [2024-11-29T11:54:21.471Z] poller_cost: 6941 (cyc), 3155 (nsec) 00:10:15.960 ************************************ 00:10:15.960 END TEST thread_poller_perf 00:10:15.960 ************************************ 00:10:15.960 00:10:15.960 real 0m1.397s 00:10:15.960 user 0m1.216s 00:10:15.960 sys 0m0.072s 00:10:15.960 11:54:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:15.960 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:10:16.219 11:54:21 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:16.219 11:54:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:16.219 11:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.219 11:54:21 -- common/autotest_common.sh@10 -- # set +x 00:10:16.219 ************************************ 00:10:16.219 START TEST thread_poller_perf 00:10:16.219 ************************************ 00:10:16.219 11:54:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:16.219 [2024-11-29 11:54:21.520682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:16.219 [2024-11-29 11:54:21.521001] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67828 ] 00:10:16.219 [2024-11-29 11:54:21.657078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 
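The poller_cost figure printed for the first run above is simply the busy cycle count divided by the iteration count, converted to nanoseconds with the reported TSC rate. The numbers can be reproduced from the counters in the log with integer shell arithmetic (values copied from the run above):

    busy=2214476334 runs=319000 tsc_hz=2200000000
    cyc=$(( busy / runs ))                     # 6941 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))      # 3155 ns at the 2.2 GHz TSC
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic gives the 498 cycles / 226 ns reported for the zero-period run that follows.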
00:10:16.477 [2024-11-29 11:54:21.778721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.412 [2024-11-29T11:54:22.923Z] ====================================== 00:10:17.412 [2024-11-29T11:54:22.923Z] busy:2203490674 (cyc) 00:10:17.412 [2024-11-29T11:54:22.923Z] total_run_count: 4424000 00:10:17.412 [2024-11-29T11:54:22.923Z] tsc_hz: 2200000000 (cyc) 00:10:17.412 [2024-11-29T11:54:22.923Z] ====================================== 00:10:17.412 [2024-11-29T11:54:22.923Z] poller_cost: 498 (cyc), 226 (nsec) 00:10:17.412 ************************************ 00:10:17.412 END TEST thread_poller_perf 00:10:17.412 ************************************ 00:10:17.412 00:10:17.412 real 0m1.381s 00:10:17.412 user 0m1.202s 00:10:17.412 sys 0m0.071s 00:10:17.412 11:54:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.412 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:10:17.670 11:54:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:17.670 ************************************ 00:10:17.670 END TEST thread 00:10:17.670 ************************************ 00:10:17.670 00:10:17.670 real 0m3.074s 00:10:17.670 user 0m2.580s 00:10:17.671 sys 0m0.278s 00:10:17.671 11:54:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.671 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 11:54:22 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:17.671 11:54:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.671 11:54:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.671 11:54:22 -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 ************************************ 00:10:17.671 START TEST accel 00:10:17.671 ************************************ 00:10:17.671 11:54:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:17.671 * Looking for test storage... 00:10:17.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:17.671 11:54:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:17.671 11:54:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:17.671 11:54:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:17.671 11:54:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:17.671 11:54:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:17.671 11:54:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:17.671 11:54:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:17.671 11:54:23 -- scripts/common.sh@335 -- # IFS=.-: 00:10:17.671 11:54:23 -- scripts/common.sh@335 -- # read -ra ver1 00:10:17.671 11:54:23 -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.671 11:54:23 -- scripts/common.sh@336 -- # read -ra ver2 00:10:17.671 11:54:23 -- scripts/common.sh@337 -- # local 'op=<' 00:10:17.671 11:54:23 -- scripts/common.sh@339 -- # ver1_l=2 00:10:17.671 11:54:23 -- scripts/common.sh@340 -- # ver2_l=1 00:10:17.671 11:54:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:17.671 11:54:23 -- scripts/common.sh@343 -- # case "$op" in 00:10:17.671 11:54:23 -- scripts/common.sh@344 -- # : 1 00:10:17.671 11:54:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:17.671 11:54:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.671 11:54:23 -- scripts/common.sh@364 -- # decimal 1 00:10:17.671 11:54:23 -- scripts/common.sh@352 -- # local d=1 00:10:17.671 11:54:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.671 11:54:23 -- scripts/common.sh@354 -- # echo 1 00:10:17.671 11:54:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:17.671 11:54:23 -- scripts/common.sh@365 -- # decimal 2 00:10:17.671 11:54:23 -- scripts/common.sh@352 -- # local d=2 00:10:17.671 11:54:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.671 11:54:23 -- scripts/common.sh@354 -- # echo 2 00:10:17.671 11:54:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:17.671 11:54:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:17.671 11:54:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:17.671 11:54:23 -- scripts/common.sh@367 -- # return 0 00:10:17.671 11:54:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.671 11:54:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 11:54:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 11:54:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 11:54:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 11:54:23 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:17.671 11:54:23 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:17.671 11:54:23 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:17.671 11:54:23 -- accel/accel.sh@59 -- # spdk_tgt_pid=67914 00:10:17.671 11:54:23 -- accel/accel.sh@60 -- # waitforlisten 67914 00:10:17.671 11:54:23 -- common/autotest_common.sh@829 -- # '[' -z 67914 ']' 00:10:17.671 11:54:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.671 11:54:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.671 11:54:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
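Each spdk_tgt in these runs is launched in the background and the harness then blocks in waitforlisten until the RPC socket is usable; the max_retries=100 local above belongs to that loop. A simplified sketch of the idea only, not the real helper from autotest_common.sh:

    sock=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        [ -S "$sock" ] && break      # stop as soon as the UNIX domain socket shows up
        sleep 0.1
    done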
00:10:17.671 11:54:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.671 11:54:23 -- accel/accel.sh@58 -- # build_accel_config 00:10:17.671 11:54:23 -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 11:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:17.671 11:54:23 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:17.671 11:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.671 11:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.671 11:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:17.671 11:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:17.671 11:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:17.671 11:54:23 -- accel/accel.sh@42 -- # jq -r . 00:10:17.929 [2024-11-29 11:54:23.229278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:17.929 [2024-11-29 11:54:23.229883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67914 ] 00:10:17.929 [2024-11-29 11:54:23.364703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.186 [2024-11-29 11:54:23.481613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.186 [2024-11-29 11:54:23.482110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.749 11:54:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.749 11:54:24 -- common/autotest_common.sh@862 -- # return 0 00:10:18.749 11:54:24 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:18.749 11:54:24 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:18.749 11:54:24 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:18.749 11:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.749 11:54:24 -- common/autotest_common.sh@10 -- # set +x 00:10:18.749 11:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 
11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # IFS== 00:10:19.006 11:54:24 -- accel/accel.sh@64 -- # read -r opc module 00:10:19.006 11:54:24 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:19.006 11:54:24 -- accel/accel.sh@67 -- # killprocess 67914 00:10:19.006 11:54:24 -- common/autotest_common.sh@936 -- # '[' -z 67914 ']' 00:10:19.007 11:54:24 -- common/autotest_common.sh@940 -- # kill -0 67914 00:10:19.007 11:54:24 -- common/autotest_common.sh@941 -- # uname 00:10:19.007 11:54:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:19.007 11:54:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67914 00:10:19.007 11:54:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:19.007 11:54:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:19.007 11:54:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67914' 00:10:19.007 killing process with pid 67914 00:10:19.007 11:54:24 -- common/autotest_common.sh@955 -- # kill 67914 00:10:19.007 11:54:24 -- common/autotest_common.sh@960 -- # wait 67914 00:10:19.572 11:54:24 -- accel/accel.sh@68 -- # trap - ERR 00:10:19.572 11:54:24 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:19.572 11:54:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:19.572 11:54:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.572 11:54:24 -- common/autotest_common.sh@10 -- # set +x 00:10:19.572 11:54:24 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:10:19.572 11:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:19.572 11:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.572 11:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.572 11:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.572 11:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.572 11:54:24 -- accel/accel.sh@42 -- # jq -r . 
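The run of identical trace lines above is accel.sh (lines @63-@65) walking every opcode reported by the target and recording which module is expected to service it; the jq filter '| to_entries | map("\(.key)=\(.value)") | .[]' seen earlier turns the opcode-assignment JSON into "opcode=module" pairs, and since no hardware accel module is configured in this run every entry is pinned to software. That map is what the later crc32c/copy/fill runs are checked against, and the accel_perf instance that answered the RPC (pid 67914) is then torn down with killprocess. A rough reconstruction of the loop, with the traced commands kept verbatim and the surrounding plumbing assumed:

declare -A expected_opcs=()

# exp_opcs holds "opcode=module" strings produced by the jq filter above
for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"
    # no hardware accel module is loaded here, so every opcode is expected to
    # fall back to the software implementation
    expected_opcs["$opc"]=software
done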
00:10:19.572 11:54:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:19.572 11:54:24 -- common/autotest_common.sh@10 -- # set +x 00:10:19.572 11:54:24 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:19.572 11:54:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:19.572 11:54:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.572 11:54:24 -- common/autotest_common.sh@10 -- # set +x 00:10:19.572 ************************************ 00:10:19.572 START TEST accel_missing_filename 00:10:19.572 ************************************ 00:10:19.572 11:54:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:10:19.572 11:54:24 -- common/autotest_common.sh@650 -- # local es=0 00:10:19.572 11:54:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:19.572 11:54:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:19.572 11:54:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.572 11:54:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:19.572 11:54:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.572 11:54:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:10:19.572 11:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:19.572 11:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.572 11:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.572 11:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.572 11:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.572 11:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.572 11:54:24 -- accel/accel.sh@42 -- # jq -r . 00:10:19.572 [2024-11-29 11:54:24.938697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:19.572 [2024-11-29 11:54:24.939184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67969 ] 00:10:19.572 [2024-11-29 11:54:25.080041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.838 [2024-11-29 11:54:25.213045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.838 [2024-11-29 11:54:25.291193] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:20.133 [2024-11-29 11:54:25.404776] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:20.133 A filename is required. 
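What accel_missing_filename asserts is simply that accel_perf refuses to start a compress workload when no input file is supplied. The invocation below repeats the one traced above (binary path and flags verbatim; the harness additionally passes -c /dev/fd/62 with the JSON emitted by build_accel_config):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
# -> "A filename is required."
# compress/decompress workloads need -l <uncompressed input file>; without it
# spdk_app_start fails, accel_perf exits non-zero, and the NOT wrapper turns
# that failure into a pass for run_test.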
00:10:20.133 11:54:25 -- common/autotest_common.sh@653 -- # es=234 00:10:20.133 11:54:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.133 11:54:25 -- common/autotest_common.sh@662 -- # es=106 00:10:20.133 11:54:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:20.133 11:54:25 -- common/autotest_common.sh@670 -- # es=1 00:10:20.133 ************************************ 00:10:20.133 END TEST accel_missing_filename 00:10:20.133 ************************************ 00:10:20.133 11:54:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.133 00:10:20.133 real 0m0.598s 00:10:20.133 user 0m0.387s 00:10:20.133 sys 0m0.155s 00:10:20.133 11:54:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.133 11:54:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 11:54:25 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:20.133 11:54:25 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:20.133 11:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.133 11:54:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 ************************************ 00:10:20.133 START TEST accel_compress_verify 00:10:20.134 ************************************ 00:10:20.134 11:54:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:20.134 11:54:25 -- common/autotest_common.sh@650 -- # local es=0 00:10:20.134 11:54:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:20.134 11:54:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:20.134 11:54:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.134 11:54:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:20.134 11:54:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.134 11:54:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:20.134 11:54:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:20.134 11:54:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.134 11:54:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.134 11:54:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.134 11:54:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.134 11:54:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.134 11:54:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.134 11:54:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.134 11:54:25 -- accel/accel.sh@42 -- # jq -r . 00:10:20.134 [2024-11-29 11:54:25.585417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
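The es=234 -> 106 -> 1 sequence at the top of this block (and es=161 -> 33 -> 1 in the compress-verify case below) is the NOT helper from autotest_common.sh normalising the wrapped command's exit status before inverting it. A loose sketch of that logic, reconstructed from the traced line numbers (@650-@677); the exact case branches and the EXIT_STATUS handling are assumptions:

NOT() {
    local es=0
    valid_exec_arg "$@"         # @638-@642: only a binary, function or builtin may be wrapped
    "$@" || es=$?               # es=234 for the aborted accel_perf run above
    if ((es > 128)); then
        es=$((es & ~128))       # strip the 128 offset: 234 -> 106, 161 -> 33
        case "$es" in
            # the real helper special-cases a few signal numbers here (assumption);
            # anything else collapses into an ordinary failure
            *) es=1 ;;
        esac
    elif [[ -n ${EXIT_STATUS:-} ]] && ((es != EXIT_STATUS)); then
        es=0                    # assumption: an explicitly expected status can be enforced
    fi
    ((!es == 0))                # @677: return success only when the wrapped command failed
}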
00:10:20.134 [2024-11-29 11:54:25.585536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67993 ] 00:10:20.391 [2024-11-29 11:54:25.716395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.391 [2024-11-29 11:54:25.849595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.648 [2024-11-29 11:54:25.930994] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:20.648 [2024-11-29 11:54:26.040271] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:20.648 00:10:20.648 Compression does not support the verify option, aborting. 00:10:20.648 11:54:26 -- common/autotest_common.sh@653 -- # es=161 00:10:20.648 11:54:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.648 11:54:26 -- common/autotest_common.sh@662 -- # es=33 00:10:20.648 11:54:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:10:20.648 11:54:26 -- common/autotest_common.sh@670 -- # es=1 00:10:20.648 11:54:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.648 00:10:20.648 real 0m0.593s 00:10:20.648 user 0m0.388s 00:10:20.648 sys 0m0.149s 00:10:20.648 11:54:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.648 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.648 ************************************ 00:10:20.648 END TEST accel_compress_verify 00:10:20.648 ************************************ 00:10:20.906 11:54:26 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:20.906 11:54:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:20.906 11:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.906 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 ************************************ 00:10:20.906 START TEST accel_wrong_workload 00:10:20.906 ************************************ 00:10:20.906 11:54:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:10:20.906 11:54:26 -- common/autotest_common.sh@650 -- # local es=0 00:10:20.906 11:54:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:20.906 11:54:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.906 11:54:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.906 11:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.906 11:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.906 11:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.906 11:54:26 -- accel/accel.sh@42 -- # jq -r . 
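Every accel_perf run in this job goes through the same wrapper: the '-c /dev/fd/62' in the trace is a process substitution fed by build_accel_config (accel.sh @12 and @32-@42). A rough reconstruction follows; the concrete test switches (SPDK_TEST_ACCEL_DSA and friends), the RPC method names and SPDK_EXAMPLE_DIR are assumptions, while the empty array, the IFS=, join and the 'jq -r .' step mirror the traced commands:

build_accel_config() {
    accel_json_cfg=()

    # all three switches evaluate to 0 in this run (the "[[ 0 -gt 0 ]]" lines above),
    # so no hardware accel module is added and every opcode stays in software
    [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
    [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
    [[ ${SPDK_TEST_IOAT:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "ioat_scan_accel_module"}')

    local IFS=,
    # join the fragments with commas and pretty-print the resulting JSON
    jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}

accel_perf() {
    # the process substitution is what shows up as /dev/fd/62 in the xtrace
    "${SPDK_EXAMPLE_DIR:-/home/vagrant/spdk_repo/spdk/build/examples}/accel_perf" -c <(build_accel_config) "$@"
}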
00:10:20.906 Unsupported workload type: foobar 00:10:20.906 [2024-11-29 11:54:26.232710] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:20.906 accel_perf options: 00:10:20.906 [-h help message] 00:10:20.906 [-q queue depth per core] 00:10:20.906 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:20.906 [-T number of threads per core 00:10:20.906 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:20.906 [-t time in seconds] 00:10:20.906 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:20.906 [ dif_verify, , dif_generate, dif_generate_copy 00:10:20.906 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:20.906 [-l for compress/decompress workloads, name of uncompressed input file 00:10:20.906 [-S for crc32c workload, use this seed value (default 0) 00:10:20.906 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:20.906 [-f for fill workload, use this BYTE value (default 255) 00:10:20.906 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:20.906 [-y verify result if this switch is on] 00:10:20.906 [-a tasks to allocate per core (default: same value as -q)] 00:10:20.906 Can be used to spread operations across a wider range of memory. 00:10:20.906 ************************************ 00:10:20.906 END TEST accel_wrong_workload 00:10:20.906 ************************************ 00:10:20.906 11:54:26 -- common/autotest_common.sh@653 -- # es=1 00:10:20.906 11:54:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.906 11:54:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:20.906 11:54:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.906 00:10:20.906 real 0m0.034s 00:10:20.906 user 0m0.026s 00:10:20.906 sys 0m0.007s 00:10:20.906 11:54:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.906 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 11:54:26 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:20.906 11:54:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:20.906 11:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.906 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 ************************************ 00:10:20.906 START TEST accel_negative_buffers 00:10:20.906 ************************************ 00:10:20.906 11:54:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:20.906 11:54:26 -- common/autotest_common.sh@650 -- # local es=0 00:10:20.906 11:54:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:20.906 11:54:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:20.906 11:54:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.906 11:54:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:20.906 11:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.906 11:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.906 11:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.906 11:54:26 -- accel/accel.sh@42 -- # jq -r . 00:10:20.906 -x option must be non-negative. 00:10:20.906 [2024-11-29 11:54:26.311381] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:20.906 accel_perf options: 00:10:20.906 [-h help message] 00:10:20.906 [-q queue depth per core] 00:10:20.906 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:20.906 [-T number of threads per core 00:10:20.906 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:20.906 [-t time in seconds] 00:10:20.906 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:20.906 [ dif_verify, , dif_generate, dif_generate_copy 00:10:20.906 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:20.906 [-l for compress/decompress workloads, name of uncompressed input file 00:10:20.906 [-S for crc32c workload, use this seed value (default 0) 00:10:20.906 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:20.906 [-f for fill workload, use this BYTE value (default 255) 00:10:20.906 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:20.906 [-y verify result if this switch is on] 00:10:20.906 [-a tasks to allocate per core (default: same value as -q)] 00:10:20.906 Can be used to spread operations across a wider range of memory. 
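For contrast with the rejected '-x -1', the option summary above corresponds to invocations such as the following (illustrative only, reusing the binary path from the trace; inside the suite everything is driven through accel_test with the generated config):

# xor across 3 source buffers, 1 MiB transfers, queue depth 64, verify enabled
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 3 -o 1048576 -q 64 -y

# crc32c with an explicit seed, mirroring the accel_crc32c test that follows
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

In the tests below, accel_test then parses the 'SPDK Configuration:' block of each run (the IFS=: / read -r var val / case traces) and checks that the 'Module:' line matches the software entry recorded in expected_opcs.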
00:10:20.906 ************************************ 00:10:20.906 END TEST accel_negative_buffers 00:10:20.906 ************************************ 00:10:20.906 11:54:26 -- common/autotest_common.sh@653 -- # es=1 00:10:20.906 11:54:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.906 11:54:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:20.906 11:54:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.906 00:10:20.906 real 0m0.026s 00:10:20.906 user 0m0.013s 00:10:20.906 sys 0m0.013s 00:10:20.906 11:54:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.906 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 11:54:26 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:20.906 11:54:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:20.906 11:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.906 11:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 ************************************ 00:10:20.906 START TEST accel_crc32c 00:10:20.906 ************************************ 00:10:20.906 11:54:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:20.906 11:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:20.906 11:54:26 -- accel/accel.sh@17 -- # local accel_module 00:10:20.906 11:54:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:20.906 11:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.906 11:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.906 11:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.906 11:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.906 11:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.906 11:54:26 -- accel/accel.sh@42 -- # jq -r . 00:10:20.906 [2024-11-29 11:54:26.383259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:20.906 [2024-11-29 11:54:26.383645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68057 ] 00:10:21.163 [2024-11-29 11:54:26.515040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.163 [2024-11-29 11:54:26.649275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.537 11:54:27 -- accel/accel.sh@18 -- # out=' 00:10:22.537 SPDK Configuration: 00:10:22.537 Core mask: 0x1 00:10:22.537 00:10:22.537 Accel Perf Configuration: 00:10:22.537 Workload Type: crc32c 00:10:22.537 CRC-32C seed: 32 00:10:22.537 Transfer size: 4096 bytes 00:10:22.537 Vector count 1 00:10:22.537 Module: software 00:10:22.537 Queue depth: 32 00:10:22.537 Allocate depth: 32 00:10:22.537 # threads/core: 1 00:10:22.537 Run time: 1 seconds 00:10:22.537 Verify: Yes 00:10:22.537 00:10:22.537 Running for 1 seconds... 
00:10:22.537 00:10:22.537 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:22.537 ------------------------------------------------------------------------------------ 00:10:22.537 0,0 472640/s 1846 MiB/s 0 0 00:10:22.537 ==================================================================================== 00:10:22.537 Total 472640/s 1846 MiB/s 0 0' 00:10:22.537 11:54:27 -- accel/accel.sh@20 -- # IFS=: 00:10:22.537 11:54:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:22.537 11:54:27 -- accel/accel.sh@20 -- # read -r var val 00:10:22.537 11:54:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:22.537 11:54:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.537 11:54:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.537 11:54:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.537 11:54:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.537 11:54:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.537 11:54:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.537 11:54:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.537 11:54:27 -- accel/accel.sh@42 -- # jq -r . 00:10:22.537 [2024-11-29 11:54:27.987992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:22.537 [2024-11-29 11:54:27.988100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68077 ] 00:10:22.796 [2024-11-29 11:54:28.119456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.796 [2024-11-29 11:54:28.236405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=0x1 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=crc32c 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=32 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=software 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=32 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=32 00:10:23.055 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.055 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.055 11:54:28 -- accel/accel.sh@21 -- # val=1 00:10:23.056 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 11:54:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:23.056 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 11:54:28 -- accel/accel.sh@21 -- # val=Yes 00:10:23.056 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.056 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 11:54:28 -- accel/accel.sh@21 -- # val= 00:10:23.056 11:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 11:54:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.432 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.432 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.433 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.433 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.433 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.433 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- 
accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@21 -- # val= 00:10:24.433 11:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # IFS=: 00:10:24.433 11:54:29 -- accel/accel.sh@20 -- # read -r var val 00:10:24.433 11:54:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:24.433 11:54:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:24.433 11:54:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.433 00:10:24.433 real 0m3.198s 00:10:24.433 user 0m2.688s 00:10:24.433 sys 0m0.306s 00:10:24.433 11:54:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.433 ************************************ 00:10:24.433 END TEST accel_crc32c 00:10:24.433 ************************************ 00:10:24.433 11:54:29 -- common/autotest_common.sh@10 -- # set +x 00:10:24.433 11:54:29 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:24.433 11:54:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:24.433 11:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.433 11:54:29 -- common/autotest_common.sh@10 -- # set +x 00:10:24.433 ************************************ 00:10:24.433 START TEST accel_crc32c_C2 00:10:24.433 ************************************ 00:10:24.433 11:54:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:24.433 11:54:29 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.433 11:54:29 -- accel/accel.sh@17 -- # local accel_module 00:10:24.433 11:54:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:24.433 11:54:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:24.433 11:54:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.433 11:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.433 11:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.433 11:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.433 11:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.433 11:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.433 11:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.433 11:54:29 -- accel/accel.sh@42 -- # jq -r . 00:10:24.433 [2024-11-29 11:54:29.637111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:24.433 [2024-11-29 11:54:29.637219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68111 ] 00:10:24.433 [2024-11-29 11:54:29.770197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.433 [2024-11-29 11:54:29.888916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.809 11:54:31 -- accel/accel.sh@18 -- # out=' 00:10:25.809 SPDK Configuration: 00:10:25.809 Core mask: 0x1 00:10:25.809 00:10:25.809 Accel Perf Configuration: 00:10:25.809 Workload Type: crc32c 00:10:25.809 CRC-32C seed: 0 00:10:25.809 Transfer size: 4096 bytes 00:10:25.809 Vector count 2 00:10:25.809 Module: software 00:10:25.809 Queue depth: 32 00:10:25.809 Allocate depth: 32 00:10:25.809 # threads/core: 1 00:10:25.809 Run time: 1 seconds 00:10:25.809 Verify: Yes 00:10:25.809 00:10:25.809 Running for 1 seconds... 
00:10:25.809 00:10:25.809 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.810 ------------------------------------------------------------------------------------ 00:10:25.810 0,0 357312/s 2791 MiB/s 0 0 00:10:25.810 ==================================================================================== 00:10:25.810 Total 357312/s 1395 MiB/s 0 0' 00:10:25.810 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:25.810 11:54:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:25.810 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:25.810 11:54:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:25.810 11:54:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.810 11:54:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.810 11:54:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.810 11:54:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.810 11:54:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.810 11:54:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.810 11:54:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.810 11:54:31 -- accel/accel.sh@42 -- # jq -r . 00:10:25.810 [2024-11-29 11:54:31.260667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:25.810 [2024-11-29 11:54:31.260782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68131 ] 00:10:26.069 [2024-11-29 11:54:31.395939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.069 [2024-11-29 11:54:31.517171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=0x1 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=crc32c 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=0 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=software 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=32 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=32 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=1 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val=Yes 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:26.327 11:54:31 -- accel/accel.sh@21 -- # val= 00:10:26.327 11:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # IFS=: 00:10:26.327 11:54:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- 
accel/accel.sh@20 -- # read -r var val 00:10:27.705 ************************************ 00:10:27.705 END TEST accel_crc32c_C2 00:10:27.705 ************************************ 00:10:27.705 11:54:32 -- accel/accel.sh@21 -- # val= 00:10:27.705 11:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # IFS=: 00:10:27.705 11:54:32 -- accel/accel.sh@20 -- # read -r var val 00:10:27.705 11:54:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:27.705 11:54:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:27.705 11:54:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.705 00:10:27.705 real 0m3.264s 00:10:27.705 user 0m2.757s 00:10:27.705 sys 0m0.304s 00:10:27.705 11:54:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.705 11:54:32 -- common/autotest_common.sh@10 -- # set +x 00:10:27.705 11:54:32 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:27.705 11:54:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:27.705 11:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.705 11:54:32 -- common/autotest_common.sh@10 -- # set +x 00:10:27.705 ************************************ 00:10:27.705 START TEST accel_copy 00:10:27.705 ************************************ 00:10:27.705 11:54:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:10:27.705 11:54:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.705 11:54:32 -- accel/accel.sh@17 -- # local accel_module 00:10:27.705 11:54:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:27.705 11:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:27.705 11:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.705 11:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.705 11:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.705 11:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.705 11:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.705 11:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.705 11:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.705 11:54:32 -- accel/accel.sh@42 -- # jq -r . 00:10:27.705 [2024-11-29 11:54:32.959188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:27.705 [2024-11-29 11:54:32.959446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68166 ] 00:10:27.705 [2024-11-29 11:54:33.097101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.705 [2024-11-29 11:54:33.207058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.098 11:54:34 -- accel/accel.sh@18 -- # out=' 00:10:29.098 SPDK Configuration: 00:10:29.098 Core mask: 0x1 00:10:29.098 00:10:29.098 Accel Perf Configuration: 00:10:29.098 Workload Type: copy 00:10:29.098 Transfer size: 4096 bytes 00:10:29.098 Vector count 1 00:10:29.098 Module: software 00:10:29.098 Queue depth: 32 00:10:29.098 Allocate depth: 32 00:10:29.098 # threads/core: 1 00:10:29.098 Run time: 1 seconds 00:10:29.098 Verify: Yes 00:10:29.098 00:10:29.098 Running for 1 seconds... 
00:10:29.098 00:10:29.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:29.098 ------------------------------------------------------------------------------------ 00:10:29.098 0,0 335424/s 1310 MiB/s 0 0 00:10:29.098 ==================================================================================== 00:10:29.098 Total 335424/s 1310 MiB/s 0 0' 00:10:29.098 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.098 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.098 11:54:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:29.098 11:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:29.098 11:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.098 11:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.098 11:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.098 11:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.098 11:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.098 11:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.098 11:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.098 11:54:34 -- accel/accel.sh@42 -- # jq -r . 00:10:29.098 [2024-11-29 11:54:34.548582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.098 [2024-11-29 11:54:34.548799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68186 ] 00:10:29.357 [2024-11-29 11:54:34.680607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.357 [2024-11-29 11:54:34.806970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=0x1 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=copy 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- 
accel/accel.sh@21 -- # val= 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=software 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@23 -- # accel_module=software 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=32 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=32 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=1 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:29.615 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.615 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.615 11:54:34 -- accel/accel.sh@21 -- # val=Yes 00:10:29.616 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.616 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.616 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:29.616 11:54:34 -- accel/accel.sh@21 -- # val= 00:10:29.616 11:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # IFS=: 00:10:29.616 11:54:34 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@21 -- # val= 00:10:30.992 11:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:30.992 11:54:36 -- accel/accel.sh@20 -- # IFS=: 00:10:30.992 11:54:36 -- 
accel/accel.sh@20 -- # read -r var val 00:10:30.992 11:54:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:30.992 11:54:36 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:30.992 11:54:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.992 00:10:30.992 real 0m3.197s 00:10:30.992 user 0m2.681s 00:10:30.992 sys 0m0.307s 00:10:30.992 11:54:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:30.992 11:54:36 -- common/autotest_common.sh@10 -- # set +x 00:10:30.992 ************************************ 00:10:30.992 END TEST accel_copy 00:10:30.992 ************************************ 00:10:30.992 11:54:36 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:30.992 11:54:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:30.992 11:54:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.992 11:54:36 -- common/autotest_common.sh@10 -- # set +x 00:10:30.992 ************************************ 00:10:30.992 START TEST accel_fill 00:10:30.992 ************************************ 00:10:30.992 11:54:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:30.992 11:54:36 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.992 11:54:36 -- accel/accel.sh@17 -- # local accel_module 00:10:30.992 11:54:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:30.992 11:54:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:30.992 11:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.992 11:54:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.992 11:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.992 11:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.992 11:54:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.992 11:54:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.992 11:54:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.992 11:54:36 -- accel/accel.sh@42 -- # jq -r . 00:10:30.992 [2024-11-29 11:54:36.217663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:30.992 [2024-11-29 11:54:36.217807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68226 ] 00:10:30.992 [2024-11-29 11:54:36.358354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.992 [2024-11-29 11:54:36.496084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.368 11:54:37 -- accel/accel.sh@18 -- # out=' 00:10:32.368 SPDK Configuration: 00:10:32.368 Core mask: 0x1 00:10:32.368 00:10:32.368 Accel Perf Configuration: 00:10:32.368 Workload Type: fill 00:10:32.368 Fill pattern: 0x80 00:10:32.368 Transfer size: 4096 bytes 00:10:32.368 Vector count 1 00:10:32.368 Module: software 00:10:32.368 Queue depth: 64 00:10:32.368 Allocate depth: 64 00:10:32.368 # threads/core: 1 00:10:32.368 Run time: 1 seconds 00:10:32.368 Verify: Yes 00:10:32.368 00:10:32.368 Running for 1 seconds... 
00:10:32.368 00:10:32.368 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:32.368 ------------------------------------------------------------------------------------ 00:10:32.368 0,0 486272/s 1899 MiB/s 0 0 00:10:32.368 ==================================================================================== 00:10:32.368 Total 486272/s 1899 MiB/s 0 0' 00:10:32.368 11:54:37 -- accel/accel.sh@20 -- # IFS=: 00:10:32.368 11:54:37 -- accel/accel.sh@20 -- # read -r var val 00:10:32.368 11:54:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.368 11:54:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.369 11:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.369 11:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.369 11:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.369 11:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.369 11:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.369 11:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.369 11:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.369 11:54:37 -- accel/accel.sh@42 -- # jq -r . 00:10:32.369 [2024-11-29 11:54:37.853341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:32.369 [2024-11-29 11:54:37.853449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68240 ] 00:10:32.627 [2024-11-29 11:54:37.988344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.627 [2024-11-29 11:54:38.117361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.885 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=0x1 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=fill 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=0x80 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 
00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=software 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@23 -- # accel_module=software 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=64 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=64 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=1 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val=Yes 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:32.886 11:54:38 -- accel/accel.sh@21 -- # val= 00:10:32.886 11:54:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # IFS=: 00:10:32.886 11:54:38 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 
00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@21 -- # val= 00:10:34.262 11:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # IFS=: 00:10:34.262 11:54:39 -- accel/accel.sh@20 -- # read -r var val 00:10:34.262 11:54:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:34.262 ************************************ 00:10:34.262 END TEST accel_fill 00:10:34.262 ************************************ 00:10:34.262 11:54:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:34.262 11:54:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.262 00:10:34.262 real 0m3.247s 00:10:34.262 user 0m2.721s 00:10:34.262 sys 0m0.319s 00:10:34.263 11:54:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:34.263 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:10:34.263 11:54:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:34.263 11:54:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:34.263 11:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:34.263 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:10:34.263 ************************************ 00:10:34.263 START TEST accel_copy_crc32c 00:10:34.263 ************************************ 00:10:34.263 11:54:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:34.263 11:54:39 -- accel/accel.sh@16 -- # local accel_opc 00:10:34.263 11:54:39 -- accel/accel.sh@17 -- # local accel_module 00:10:34.263 11:54:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:34.263 11:54:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:34.263 11:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.263 11:54:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.263 11:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.263 11:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.263 11:54:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.263 11:54:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.263 11:54:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.263 11:54:39 -- accel/accel.sh@42 -- # jq -r . 00:10:34.263 [2024-11-29 11:54:39.523897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:34.263 [2024-11-29 11:54:39.524229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68280 ] 00:10:34.263 [2024-11-29 11:54:39.665314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.522 [2024-11-29 11:54:39.807286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.900 11:54:41 -- accel/accel.sh@18 -- # out=' 00:10:35.900 SPDK Configuration: 00:10:35.900 Core mask: 0x1 00:10:35.900 00:10:35.900 Accel Perf Configuration: 00:10:35.900 Workload Type: copy_crc32c 00:10:35.900 CRC-32C seed: 0 00:10:35.900 Vector size: 4096 bytes 00:10:35.900 Transfer size: 4096 bytes 00:10:35.900 Vector count 1 00:10:35.900 Module: software 00:10:35.900 Queue depth: 32 00:10:35.900 Allocate depth: 32 00:10:35.900 # threads/core: 1 00:10:35.900 Run time: 1 seconds 00:10:35.900 Verify: Yes 00:10:35.900 00:10:35.900 Running for 1 seconds... 
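The configuration block just printed is accel_perf's self-report for the first copy_crc32c pass: software (non-offloaded) module, 4096-byte vectors and transfers, queue depth 32, one thread on core 0, a one-second run with verification on. The harness passed only the flags shown in its command line above (-c /dev/fd/62 -t 1 -w copy_crc32c -y), so the queue depth and sizes in the report came from accel_perf's own defaults in this run. A rough way to repeat the pass by hand on the same VM, assuming a stock build (dropping the -c JSON config fd should still land on the software module, but that is an assumption, not something this log shows):

# one-second software copy_crc32c run with verification, default queue depth and buffer size
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y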
00:10:35.900 00:10:35.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:35.900 ------------------------------------------------------------------------------------ 00:10:35.900 0,0 247136/s 965 MiB/s 0 0 00:10:35.900 ==================================================================================== 00:10:35.900 Total 247136/s 965 MiB/s 0 0' 00:10:35.900 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:35.900 11:54:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:35.900 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:35.900 11:54:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:35.900 11:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.900 11:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.900 11:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.900 11:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.900 11:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.900 11:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.900 11:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.900 11:54:41 -- accel/accel.sh@42 -- # jq -r . 00:10:35.900 [2024-11-29 11:54:41.160532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:35.900 [2024-11-29 11:54:41.160719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68299 ] 00:10:35.900 [2024-11-29 11:54:41.304922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.159 [2024-11-29 11:54:41.446442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=0x1 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=0 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 
11:54:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=software 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@23 -- # accel_module=software 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=32 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=32 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=1 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val=Yes 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:36.159 11:54:41 -- accel/accel.sh@21 -- # val= 00:10:36.159 11:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # IFS=: 00:10:36.159 11:54:41 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 ************************************ 00:10:37.538 END TEST accel_copy_crc32c 00:10:37.538 
************************************ 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@21 -- # val= 00:10:37.538 11:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # IFS=: 00:10:37.538 11:54:42 -- accel/accel.sh@20 -- # read -r var val 00:10:37.538 11:54:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:37.538 11:54:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:37.538 11:54:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.538 00:10:37.538 real 0m3.277s 00:10:37.538 user 0m2.741s 00:10:37.538 sys 0m0.331s 00:10:37.538 11:54:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:37.538 11:54:42 -- common/autotest_common.sh@10 -- # set +x 00:10:37.538 11:54:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:37.538 11:54:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:37.538 11:54:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.538 11:54:42 -- common/autotest_common.sh@10 -- # set +x 00:10:37.538 ************************************ 00:10:37.538 START TEST accel_copy_crc32c_C2 00:10:37.538 ************************************ 00:10:37.538 11:54:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:37.538 11:54:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.538 11:54:42 -- accel/accel.sh@17 -- # local accel_module 00:10:37.538 11:54:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:37.538 11:54:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:37.538 11:54:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.538 11:54:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.538 11:54:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.538 11:54:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.538 11:54:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.538 11:54:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.538 11:54:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.538 11:54:42 -- accel/accel.sh@42 -- # jq -r . 00:10:37.538 [2024-11-29 11:54:42.864017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:37.538 [2024-11-29 11:54:42.864149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68334 ] 00:10:37.538 [2024-11-29 11:54:42.999799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.798 [2024-11-29 11:54:43.137988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.176 11:54:44 -- accel/accel.sh@18 -- # out=' 00:10:39.176 SPDK Configuration: 00:10:39.176 Core mask: 0x1 00:10:39.176 00:10:39.176 Accel Perf Configuration: 00:10:39.176 Workload Type: copy_crc32c 00:10:39.176 CRC-32C seed: 0 00:10:39.176 Vector size: 4096 bytes 00:10:39.176 Transfer size: 8192 bytes 00:10:39.176 Vector count 2 00:10:39.176 Module: software 00:10:39.176 Queue depth: 32 00:10:39.176 Allocate depth: 32 00:10:39.176 # threads/core: 1 00:10:39.176 Run time: 1 seconds 00:10:39.176 Verify: Yes 00:10:39.176 00:10:39.176 Running for 1 seconds... 00:10:39.176 00:10:39.176 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:39.176 ------------------------------------------------------------------------------------ 00:10:39.176 0,0 174656/s 1364 MiB/s 0 0 00:10:39.176 ==================================================================================== 00:10:39.176 Total 174656/s 682 MiB/s 0 0' 00:10:39.176 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.176 11:54:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:39.176 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.176 11:54:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.176 11:54:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:39.176 11:54:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.176 11:54:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.176 11:54:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.176 11:54:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.176 11:54:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.176 11:54:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.176 11:54:44 -- accel/accel.sh@42 -- # jq -r . 00:10:39.176 [2024-11-29 11:54:44.479379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
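A quick sanity check on the copy_crc32c_C2 numbers above: the per-core line reports 174656 transfers/s at 8192 bytes per transfer, i.e. 174656 x 8192 is roughly 1.43 GB/s, or about 1364 MiB/s, which matches the Bandwidth column. The Total line shows the same 174656 transfers/s but 682 MiB/s, exactly half; it appears to account per 4096-byte source vector rather than per 8192-byte transfer. Treat that reading as an inference from this output, not a documented convention.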
00:10:39.176 [2024-11-29 11:54:44.479613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68359 ] 00:10:39.176 [2024-11-29 11:54:44.621279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.436 [2024-11-29 11:54:44.752357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=0x1 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=0 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=software 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@23 -- # accel_module=software 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=32 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=32 
00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=1 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val=Yes 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:39.436 11:54:44 -- accel/accel.sh@21 -- # val= 00:10:39.436 11:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # IFS=: 00:10:39.436 11:54:44 -- accel/accel.sh@20 -- # read -r var val 00:10:40.813 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.813 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.813 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.813 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.813 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.813 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.813 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.814 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.814 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.814 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.814 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.814 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.814 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.814 11:54:46 -- accel/accel.sh@21 -- # val= 00:10:40.814 11:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # IFS=: 00:10:40.814 11:54:46 -- accel/accel.sh@20 -- # read -r var val 00:10:40.814 11:54:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:40.814 11:54:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:40.814 11:54:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.814 00:10:40.814 real 0m3.230s 00:10:40.814 user 0m2.707s 00:10:40.814 sys 0m0.317s 00:10:40.814 11:54:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:40.814 11:54:46 -- common/autotest_common.sh@10 -- # set +x 00:10:40.814 ************************************ 00:10:40.814 END TEST accel_copy_crc32c_C2 00:10:40.814 ************************************ 00:10:40.814 11:54:46 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:40.814 11:54:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
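The accel_copy_crc32c_C2 test that finishes above differs from the plain copy_crc32c test only by the -C 2 flag in its accel_perf command line: each 8192-byte transfer is described by two 4096-byte source buffers, so the same copy-plus-CRC-32C operation runs over a two-element vector. The direct form of the logged command, for reference:

# two source vectors per transfer, verify enabled, one-second run
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2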
00:10:40.814 11:54:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.814 11:54:46 -- common/autotest_common.sh@10 -- # set +x 00:10:40.814 ************************************ 00:10:40.814 START TEST accel_dualcast 00:10:40.814 ************************************ 00:10:40.814 11:54:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:40.814 11:54:46 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.814 11:54:46 -- accel/accel.sh@17 -- # local accel_module 00:10:40.814 11:54:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:40.814 11:54:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:40.814 11:54:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.814 11:54:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.814 11:54:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.814 11:54:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.814 11:54:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.814 11:54:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.814 11:54:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.814 11:54:46 -- accel/accel.sh@42 -- # jq -r . 00:10:40.814 [2024-11-29 11:54:46.144372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:40.814 [2024-11-29 11:54:46.144546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68388 ] 00:10:40.814 [2024-11-29 11:54:46.280721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.073 [2024-11-29 11:54:46.407588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.453 11:54:47 -- accel/accel.sh@18 -- # out=' 00:10:42.453 SPDK Configuration: 00:10:42.453 Core mask: 0x1 00:10:42.453 00:10:42.453 Accel Perf Configuration: 00:10:42.453 Workload Type: dualcast 00:10:42.453 Transfer size: 4096 bytes 00:10:42.453 Vector count 1 00:10:42.453 Module: software 00:10:42.453 Queue depth: 32 00:10:42.453 Allocate depth: 32 00:10:42.453 # threads/core: 1 00:10:42.453 Run time: 1 seconds 00:10:42.453 Verify: Yes 00:10:42.453 00:10:42.453 Running for 1 seconds... 00:10:42.453 00:10:42.453 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.453 ------------------------------------------------------------------------------------ 00:10:42.453 0,0 351008/s 1371 MiB/s 0 0 00:10:42.453 ==================================================================================== 00:10:42.453 Total 351008/s 1371 MiB/s 0 0' 00:10:42.453 11:54:47 -- accel/accel.sh@20 -- # IFS=: 00:10:42.453 11:54:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:42.453 11:54:47 -- accel/accel.sh@20 -- # read -r var val 00:10:42.453 11:54:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.453 11:54:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:42.453 11:54:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.453 11:54:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.453 11:54:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.453 11:54:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.453 11:54:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.453 11:54:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.453 11:54:47 -- accel/accel.sh@42 -- # jq -r . 
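The first dualcast pass above reports 351008 transfers/s, about 1371 MiB/s at 4096 bytes each. Dualcast writes the same source buffer out to two destination buffers; the reported bandwidth counts the 4096-byte source once even though two destinations are written. To rerun it outside the harness on this VM, mirroring the logged flags:

# one-second software dualcast run with verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y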
00:10:42.453 [2024-11-29 11:54:47.749758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:42.453 [2024-11-29 11:54:47.749901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68413 ] 00:10:42.453 [2024-11-29 11:54:47.887319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.712 [2024-11-29 11:54:48.020229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=0x1 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=dualcast 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=software 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@23 -- # accel_module=software 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=32 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=32 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=1 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 
11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val=Yes 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:42.712 11:54:48 -- accel/accel.sh@21 -- # val= 00:10:42.712 11:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # IFS=: 00:10:42.712 11:54:48 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 ************************************ 00:10:44.090 END TEST accel_dualcast 00:10:44.090 ************************************ 00:10:44.090 11:54:49 -- accel/accel.sh@21 -- # val= 00:10:44.090 11:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # IFS=: 00:10:44.090 11:54:49 -- accel/accel.sh@20 -- # read -r var val 00:10:44.090 11:54:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:44.090 11:54:49 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:44.090 11:54:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.090 00:10:44.090 real 0m3.219s 00:10:44.090 user 0m2.713s 00:10:44.090 sys 0m0.301s 00:10:44.090 11:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.090 11:54:49 -- common/autotest_common.sh@10 -- # set +x 00:10:44.090 11:54:49 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:44.090 11:54:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:44.090 11:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.090 11:54:49 -- common/autotest_common.sh@10 -- # set +x 00:10:44.090 ************************************ 00:10:44.090 START TEST accel_compare 00:10:44.090 ************************************ 00:10:44.090 11:54:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:44.090 
11:54:49 -- accel/accel.sh@16 -- # local accel_opc 00:10:44.090 11:54:49 -- accel/accel.sh@17 -- # local accel_module 00:10:44.090 11:54:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:44.090 11:54:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:44.090 11:54:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.090 11:54:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.090 11:54:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.090 11:54:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.090 11:54:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.090 11:54:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.090 11:54:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.090 11:54:49 -- accel/accel.sh@42 -- # jq -r . 00:10:44.090 [2024-11-29 11:54:49.414252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:44.090 [2024-11-29 11:54:49.414378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68447 ] 00:10:44.090 [2024-11-29 11:54:49.549736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.348 [2024-11-29 11:54:49.689426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.724 11:54:50 -- accel/accel.sh@18 -- # out=' 00:10:45.724 SPDK Configuration: 00:10:45.724 Core mask: 0x1 00:10:45.724 00:10:45.724 Accel Perf Configuration: 00:10:45.724 Workload Type: compare 00:10:45.724 Transfer size: 4096 bytes 00:10:45.724 Vector count 1 00:10:45.724 Module: software 00:10:45.724 Queue depth: 32 00:10:45.724 Allocate depth: 32 00:10:45.724 # threads/core: 1 00:10:45.724 Run time: 1 seconds 00:10:45.724 Verify: Yes 00:10:45.724 00:10:45.724 Running for 1 seconds... 00:10:45.724 00:10:45.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.724 ------------------------------------------------------------------------------------ 00:10:45.724 0,0 451616/s 1764 MiB/s 0 0 00:10:45.724 ==================================================================================== 00:10:45.724 Total 451616/s 1764 MiB/s 0 0' 00:10:45.724 11:54:50 -- accel/accel.sh@20 -- # IFS=: 00:10:45.724 11:54:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:45.724 11:54:50 -- accel/accel.sh@20 -- # read -r var val 00:10:45.724 11:54:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:45.724 11:54:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.724 11:54:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.724 11:54:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.724 11:54:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.724 11:54:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.724 11:54:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.724 11:54:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.724 11:54:50 -- accel/accel.sh@42 -- # jq -r . 00:10:45.724 [2024-11-29 11:54:51.019042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
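compare posts the highest transfer rate of the runs visible in this part of the log (451616 transfers/s, about 1764 MiB/s): unlike the copy-style workloads it only reads and compares two buffers, with nothing written back, which plausibly explains the gap. The logged invocation, reduced to its direct form:

# one-second software compare run with verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y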
00:10:45.724 [2024-11-29 11:54:51.019193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68467 ] 00:10:45.724 [2024-11-29 11:54:51.157036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.983 [2024-11-29 11:54:51.287662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=0x1 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=compare 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=software 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@23 -- # accel_module=software 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=32 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=32 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=1 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val=Yes 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:45.983 11:54:51 -- accel/accel.sh@21 -- # val= 00:10:45.983 11:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # IFS=: 00:10:45.983 11:54:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@21 -- # val= 00:10:47.361 11:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # IFS=: 00:10:47.361 11:54:52 -- accel/accel.sh@20 -- # read -r var val 00:10:47.361 11:54:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.361 11:54:52 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:47.361 11:54:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.361 00:10:47.361 real 0m3.212s 00:10:47.361 user 0m2.691s 00:10:47.361 sys 0m0.315s 00:10:47.361 11:54:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.361 11:54:52 -- common/autotest_common.sh@10 -- # set +x 00:10:47.361 ************************************ 00:10:47.361 END TEST accel_compare 00:10:47.361 ************************************ 00:10:47.361 11:54:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:47.361 11:54:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:47.361 11:54:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.361 11:54:52 -- common/autotest_common.sh@10 -- # set +x 00:10:47.361 ************************************ 00:10:47.361 START TEST accel_xor 00:10:47.361 ************************************ 00:10:47.361 11:54:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:47.361 11:54:52 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.361 11:54:52 -- accel/accel.sh@17 -- # local accel_module 00:10:47.361 
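accel_compare wraps up above and the harness moves on to xor. The first xor pass uses two source buffers (reported as 'Source buffers: 2' in the configuration block that follows); conceptually the software path computes each 4096-byte destination as src0 XOR src1 and, because of -y, verifies the result. Mirroring the logged command:

# xor across two source buffers, software module, verify on
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y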
11:54:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:47.361 11:54:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:47.361 11:54:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.361 11:54:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.361 11:54:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.361 11:54:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.361 11:54:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.361 11:54:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.361 11:54:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.361 11:54:52 -- accel/accel.sh@42 -- # jq -r . 00:10:47.361 [2024-11-29 11:54:52.675055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:47.361 [2024-11-29 11:54:52.675503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68507 ] 00:10:47.361 [2024-11-29 11:54:52.816282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.621 [2024-11-29 11:54:52.947942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.996 11:54:54 -- accel/accel.sh@18 -- # out=' 00:10:48.996 SPDK Configuration: 00:10:48.996 Core mask: 0x1 00:10:48.996 00:10:48.996 Accel Perf Configuration: 00:10:48.996 Workload Type: xor 00:10:48.996 Source buffers: 2 00:10:48.996 Transfer size: 4096 bytes 00:10:48.996 Vector count 1 00:10:48.996 Module: software 00:10:48.996 Queue depth: 32 00:10:48.996 Allocate depth: 32 00:10:48.996 # threads/core: 1 00:10:48.996 Run time: 1 seconds 00:10:48.996 Verify: Yes 00:10:48.996 00:10:48.996 Running for 1 seconds... 00:10:48.996 00:10:48.996 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.996 ------------------------------------------------------------------------------------ 00:10:48.996 0,0 226592/s 885 MiB/s 0 0 00:10:48.996 ==================================================================================== 00:10:48.996 Total 226592/s 885 MiB/s 0 0' 00:10:48.996 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:48.996 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:48.996 11:54:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:48.996 11:54:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:48.996 11:54:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.996 11:54:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.996 11:54:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.996 11:54:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.996 11:54:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.996 11:54:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.996 11:54:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.996 11:54:54 -- accel/accel.sh@42 -- # jq -r . 00:10:48.996 [2024-11-29 11:54:54.277366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:48.996 [2024-11-29 11:54:54.277800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68521 ] 00:10:48.996 [2024-11-29 11:54:54.412769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.255 [2024-11-29 11:54:54.537822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=0x1 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=xor 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=2 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=software 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=32 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=32 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=1 00:10:49.255 11:54:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val=Yes 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.255 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.255 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.255 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.256 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.256 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:49.256 11:54:54 -- accel/accel.sh@21 -- # val= 00:10:49.256 11:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.256 11:54:54 -- accel/accel.sh@20 -- # IFS=: 00:10:49.256 11:54:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@21 -- # val= 00:10:50.629 11:54:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # IFS=: 00:10:50.629 11:54:55 -- accel/accel.sh@20 -- # read -r var val 00:10:50.629 11:54:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.629 11:54:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:50.629 11:54:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.629 00:10:50.629 real 0m3.191s 00:10:50.629 user 0m2.689s 00:10:50.629 sys 0m0.294s 00:10:50.629 ************************************ 00:10:50.629 END TEST accel_xor 00:10:50.629 ************************************ 00:10:50.629 11:54:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.629 11:54:55 -- common/autotest_common.sh@10 -- # set +x 00:10:50.629 11:54:55 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:50.629 11:54:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:50.629 11:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.629 11:54:55 -- common/autotest_common.sh@10 -- # set +x 00:10:50.629 ************************************ 00:10:50.629 START TEST accel_xor 00:10:50.629 ************************************ 00:10:50.629 
11:54:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:50.629 11:54:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.629 11:54:55 -- accel/accel.sh@17 -- # local accel_module 00:10:50.629 11:54:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:50.629 11:54:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.629 11:54:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:50.629 11:54:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.629 11:54:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.629 11:54:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.629 11:54:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.629 11:54:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.629 11:54:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.629 11:54:55 -- accel/accel.sh@42 -- # jq -r . 00:10:50.629 [2024-11-29 11:54:55.912691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:50.629 [2024-11-29 11:54:55.913031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68561 ] 00:10:50.629 [2024-11-29 11:54:56.047608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.887 [2024-11-29 11:54:56.173569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.258 11:54:57 -- accel/accel.sh@18 -- # out=' 00:10:52.258 SPDK Configuration: 00:10:52.258 Core mask: 0x1 00:10:52.258 00:10:52.258 Accel Perf Configuration: 00:10:52.258 Workload Type: xor 00:10:52.258 Source buffers: 3 00:10:52.258 Transfer size: 4096 bytes 00:10:52.258 Vector count 1 00:10:52.258 Module: software 00:10:52.258 Queue depth: 32 00:10:52.258 Allocate depth: 32 00:10:52.258 # threads/core: 1 00:10:52.258 Run time: 1 seconds 00:10:52.258 Verify: Yes 00:10:52.258 00:10:52.258 Running for 1 seconds... 00:10:52.258 00:10:52.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.258 ------------------------------------------------------------------------------------ 00:10:52.258 0,0 218560/s 853 MiB/s 0 0 00:10:52.258 ==================================================================================== 00:10:52.258 Total 218560/s 853 MiB/s 0 0' 00:10:52.258 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.258 11:54:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:52.258 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.258 11:54:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:52.258 11:54:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.258 11:54:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.258 11:54:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.258 11:54:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.258 11:54:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.258 11:54:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.258 11:54:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.258 11:54:57 -- accel/accel.sh@42 -- # jq -r . 00:10:52.258 [2024-11-29 11:54:57.529121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
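The second accel_xor run above adds -x 3 to the otherwise identical command, which is what raises 'Source buffers' to 3 in the report: each destination is now the XOR of three 4096-byte sources. Throughput dips only slightly against the two-source run (218560/s here versus 226592/s earlier), suggesting the extra source read is the main added cost in the software path. Direct form of the logged command:

# three-source xor, otherwise identical to the two-source run
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3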
00:10:52.258 [2024-11-29 11:54:57.529244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68575 ] 00:10:52.258 [2024-11-29 11:54:57.668457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.516 [2024-11-29 11:54:57.801149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=0x1 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=xor 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=3 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=software 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=32 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=32 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=1 00:10:52.516 11:54:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val=Yes 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:52.516 11:54:57 -- accel/accel.sh@21 -- # val= 00:10:52.516 11:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # IFS=: 00:10:52.516 11:54:57 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 ************************************ 00:10:53.892 END TEST accel_xor 00:10:53.892 ************************************ 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@21 -- # val= 00:10:53.892 11:54:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # IFS=: 00:10:53.892 11:54:59 -- accel/accel.sh@20 -- # read -r var val 00:10:53.892 11:54:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:53.892 11:54:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:53.892 11:54:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.892 00:10:53.892 real 0m3.248s 00:10:53.892 user 0m2.743s 00:10:53.892 sys 0m0.300s 00:10:53.892 11:54:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.892 11:54:59 -- common/autotest_common.sh@10 -- # set +x 00:10:53.892 11:54:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:53.892 11:54:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:53.892 11:54:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.892 11:54:59 -- common/autotest_common.sh@10 -- # set +x 00:10:53.892 ************************************ 00:10:53.892 START TEST accel_dif_verify 00:10:53.892 ************************************ 
00:10:53.892 11:54:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:53.892 11:54:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:53.892 11:54:59 -- accel/accel.sh@17 -- # local accel_module 00:10:53.892 11:54:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:53.892 11:54:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:53.892 11:54:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.892 11:54:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.892 11:54:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.892 11:54:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.892 11:54:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.892 11:54:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.892 11:54:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.892 11:54:59 -- accel/accel.sh@42 -- # jq -r . 00:10:53.892 [2024-11-29 11:54:59.204932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:53.892 [2024-11-29 11:54:59.205026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68615 ] 00:10:53.892 [2024-11-29 11:54:59.340271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.151 [2024-11-29 11:54:59.474660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.529 11:55:00 -- accel/accel.sh@18 -- # out=' 00:10:55.529 SPDK Configuration: 00:10:55.529 Core mask: 0x1 00:10:55.529 00:10:55.529 Accel Perf Configuration: 00:10:55.529 Workload Type: dif_verify 00:10:55.529 Vector size: 4096 bytes 00:10:55.529 Transfer size: 4096 bytes 00:10:55.529 Block size: 512 bytes 00:10:55.529 Metadata size: 8 bytes 00:10:55.529 Vector count 1 00:10:55.529 Module: software 00:10:55.529 Queue depth: 32 00:10:55.529 Allocate depth: 32 00:10:55.529 # threads/core: 1 00:10:55.529 Run time: 1 seconds 00:10:55.529 Verify: No 00:10:55.529 00:10:55.529 Running for 1 seconds... 00:10:55.529 00:10:55.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:55.529 ------------------------------------------------------------------------------------ 00:10:55.529 0,0 100928/s 400 MiB/s 0 0 00:10:55.529 ==================================================================================== 00:10:55.529 Total 100928/s 394 MiB/s 0 0' 00:10:55.529 11:55:00 -- accel/accel.sh@20 -- # IFS=: 00:10:55.529 11:55:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:55.529 11:55:00 -- accel/accel.sh@20 -- # read -r var val 00:10:55.529 11:55:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.529 11:55:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:55.529 11:55:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.529 11:55:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.529 11:55:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.529 11:55:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.529 11:55:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.529 11:55:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.529 11:55:00 -- accel/accel.sh@42 -- # jq -r . 00:10:55.529 [2024-11-29 11:55:00.827338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
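Note: for dif_verify the configuration above (4096-byte vectors, 512-byte block size, 8-byte metadata) means each 4096-byte transfer is verified as eight 512-byte blocks, each presumably carrying the standard 8-byte DIF tuple; the module is again software. A standalone sketch of the same case, assuming the same build path and the default (software) accel configuration:

  # 1-second dif_verify run: 4096-byte transfers of 512-byte protected blocks
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify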
00:10:55.530 [2024-11-29 11:55:00.827492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68637 ] 00:10:55.530 [2024-11-29 11:55:00.964887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.788 [2024-11-29 11:55:01.102354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.788 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.788 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.788 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.788 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.788 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.788 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.788 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.788 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.788 11:55:01 -- accel/accel.sh@21 -- # val=0x1 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val=dif_verify 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val=software 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 
-- # val=32 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val=32 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val=1 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val=No 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:55.789 11:55:01 -- accel/accel.sh@21 -- # val= 00:10:55.789 11:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # IFS=: 00:10:55.789 11:55:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@21 -- # val= 00:10:57.167 11:55:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # IFS=: 00:10:57.167 11:55:02 -- accel/accel.sh@20 -- # read -r var val 00:10:57.167 11:55:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:57.167 11:55:02 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:57.167 11:55:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.167 00:10:57.167 real 0m3.255s 00:10:57.167 user 0m2.737s 00:10:57.167 sys 0m0.313s 00:10:57.167 11:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:57.167 11:55:02 -- common/autotest_common.sh@10 -- # set +x 00:10:57.167 ************************************ 00:10:57.167 END TEST 
accel_dif_verify 00:10:57.167 ************************************ 00:10:57.167 11:55:02 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:57.167 11:55:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:57.167 11:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.167 11:55:02 -- common/autotest_common.sh@10 -- # set +x 00:10:57.167 ************************************ 00:10:57.167 START TEST accel_dif_generate 00:10:57.167 ************************************ 00:10:57.167 11:55:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:57.167 11:55:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.167 11:55:02 -- accel/accel.sh@17 -- # local accel_module 00:10:57.167 11:55:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:57.167 11:55:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:57.167 11:55:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.167 11:55:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.167 11:55:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.167 11:55:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.167 11:55:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.167 11:55:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.167 11:55:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.167 11:55:02 -- accel/accel.sh@42 -- # jq -r . 00:10:57.167 [2024-11-29 11:55:02.509389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:57.167 [2024-11-29 11:55:02.509545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68671 ] 00:10:57.167 [2024-11-29 11:55:02.644386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.426 [2024-11-29 11:55:02.781782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.822 11:55:04 -- accel/accel.sh@18 -- # out=' 00:10:58.822 SPDK Configuration: 00:10:58.822 Core mask: 0x1 00:10:58.822 00:10:58.822 Accel Perf Configuration: 00:10:58.822 Workload Type: dif_generate 00:10:58.822 Vector size: 4096 bytes 00:10:58.822 Transfer size: 4096 bytes 00:10:58.822 Block size: 512 bytes 00:10:58.822 Metadata size: 8 bytes 00:10:58.822 Vector count 1 00:10:58.822 Module: software 00:10:58.822 Queue depth: 32 00:10:58.822 Allocate depth: 32 00:10:58.822 # threads/core: 1 00:10:58.822 Run time: 1 seconds 00:10:58.822 Verify: No 00:10:58.822 00:10:58.822 Running for 1 seconds... 
00:10:58.822 00:10:58.822 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.822 ------------------------------------------------------------------------------------ 00:10:58.822 0,0 118368/s 469 MiB/s 0 0 00:10:58.822 ==================================================================================== 00:10:58.822 Total 118368/s 462 MiB/s 0 0' 00:10:58.822 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:58.822 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:58.822 11:55:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:58.822 11:55:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.822 11:55:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:58.822 11:55:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.822 11:55:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.822 11:55:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.822 11:55:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.822 11:55:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.822 11:55:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.822 11:55:04 -- accel/accel.sh@42 -- # jq -r . 00:10:58.822 [2024-11-29 11:55:04.150981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:58.822 [2024-11-29 11:55:04.151118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68691 ] 00:10:58.822 [2024-11-29 11:55:04.287354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.081 [2024-11-29 11:55:04.427477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=0x1 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=dif_generate 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 
00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=software 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@23 -- # accel_module=software 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=32 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=32 00:10:59.081 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.081 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.081 11:55:04 -- accel/accel.sh@21 -- # val=1 00:10:59.082 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.082 11:55:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:59.082 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.082 11:55:04 -- accel/accel.sh@21 -- # val=No 00:10:59.082 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.082 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.082 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:10:59.082 11:55:04 -- accel/accel.sh@21 -- # val= 00:10:59.082 11:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # IFS=: 00:10:59.082 11:55:04 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- 
accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@21 -- # val= 00:11:00.459 11:55:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # IFS=: 00:11:00.459 11:55:05 -- accel/accel.sh@20 -- # read -r var val 00:11:00.459 11:55:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.459 11:55:05 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:00.459 11:55:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.459 00:11:00.459 real 0m3.262s 00:11:00.459 user 0m2.732s 00:11:00.459 sys 0m0.322s 00:11:00.459 11:55:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:00.459 ************************************ 00:11:00.459 END TEST accel_dif_generate 00:11:00.459 ************************************ 00:11:00.459 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:11:00.459 11:55:05 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:00.459 11:55:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:00.459 11:55:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.459 11:55:05 -- common/autotest_common.sh@10 -- # set +x 00:11:00.459 ************************************ 00:11:00.459 START TEST accel_dif_generate_copy 00:11:00.459 ************************************ 00:11:00.459 11:55:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:11:00.459 11:55:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.459 11:55:05 -- accel/accel.sh@17 -- # local accel_module 00:11:00.459 11:55:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:00.459 11:55:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:00.459 11:55:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.459 11:55:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.459 11:55:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.459 11:55:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.459 11:55:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.459 11:55:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.459 11:55:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.459 11:55:05 -- accel/accel.sh@42 -- # jq -r . 00:11:00.459 [2024-11-29 11:55:05.823219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
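Note: the real/user/sys lines that close each TEST block appear to come from the harness timing the whole test function, which spans both accel_perf invocations inside a TEST (two distinct spdk_pid values and two SPDK startups are visible per test), so a ~3.2 s wall time corresponds to two 1-second workloads plus app start-up and teardown.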
00:11:00.459 [2024-11-29 11:55:05.823655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68731 ] 00:11:00.459 [2024-11-29 11:55:05.960451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.718 [2024-11-29 11:55:06.102089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.096 11:55:07 -- accel/accel.sh@18 -- # out=' 00:11:02.096 SPDK Configuration: 00:11:02.096 Core mask: 0x1 00:11:02.096 00:11:02.097 Accel Perf Configuration: 00:11:02.097 Workload Type: dif_generate_copy 00:11:02.097 Vector size: 4096 bytes 00:11:02.097 Transfer size: 4096 bytes 00:11:02.097 Vector count 1 00:11:02.097 Module: software 00:11:02.097 Queue depth: 32 00:11:02.097 Allocate depth: 32 00:11:02.097 # threads/core: 1 00:11:02.097 Run time: 1 seconds 00:11:02.097 Verify: No 00:11:02.097 00:11:02.097 Running for 1 seconds... 00:11:02.097 00:11:02.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.097 ------------------------------------------------------------------------------------ 00:11:02.097 0,0 92256/s 366 MiB/s 0 0 00:11:02.097 ==================================================================================== 00:11:02.097 Total 92256/s 360 MiB/s 0 0' 00:11:02.097 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.097 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.097 11:55:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:02.097 11:55:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:02.097 11:55:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.097 11:55:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.097 11:55:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.097 11:55:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.097 11:55:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.097 11:55:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.097 11:55:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.097 11:55:07 -- accel/accel.sh@42 -- # jq -r . 00:11:02.097 [2024-11-29 11:55:07.459254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
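Note: dif_generate_copy combines DIF generation with a copy of the data, and the single-core software-path totals logged so far track that extra work: roughly 394 MiB/s for dif_verify, 462 MiB/s for dif_generate, and 360 MiB/s for dif_generate_copy, all at 4096-byte transfers.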
00:11:02.097 [2024-11-29 11:55:07.459399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68750 ] 00:11:02.097 [2024-11-29 11:55:07.597226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.356 [2024-11-29 11:55:07.731402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=0x1 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=software 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=32 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=32 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 
-- # val=1 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val=No 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:02.356 11:55:07 -- accel/accel.sh@21 -- # val= 00:11:02.356 11:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # IFS=: 00:11:02.356 11:55:07 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@21 -- # val= 00:11:03.735 11:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # IFS=: 00:11:03.735 11:55:09 -- accel/accel.sh@20 -- # read -r var val 00:11:03.735 11:55:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:03.735 11:55:09 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:03.735 11:55:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:03.735 00:11:03.735 real 0m3.245s 00:11:03.735 user 0m2.726s 00:11:03.735 sys 0m0.314s 00:11:03.735 11:55:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.735 ************************************ 00:11:03.735 END TEST accel_dif_generate_copy 00:11:03.735 ************************************ 00:11:03.735 11:55:09 -- common/autotest_common.sh@10 -- # set +x 00:11:03.735 11:55:09 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:03.735 11:55:09 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.735 11:55:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:03.735 11:55:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.735 11:55:09 -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.735 ************************************ 00:11:03.735 START TEST accel_comp 00:11:03.735 ************************************ 00:11:03.735 11:55:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.735 11:55:09 -- accel/accel.sh@16 -- # local accel_opc 00:11:03.735 11:55:09 -- accel/accel.sh@17 -- # local accel_module 00:11:03.735 11:55:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.735 11:55:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.735 11:55:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.735 11:55:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.735 11:55:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.735 11:55:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.735 11:55:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.735 11:55:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.735 11:55:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.735 11:55:09 -- accel/accel.sh@42 -- # jq -r . 00:11:03.735 [2024-11-29 11:55:09.119854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:03.735 [2024-11-29 11:55:09.120025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68785 ] 00:11:03.994 [2024-11-29 11:55:09.262207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.994 [2024-11-29 11:55:09.391419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.422 11:55:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:05.422 00:11:05.422 SPDK Configuration: 00:11:05.422 Core mask: 0x1 00:11:05.422 00:11:05.422 Accel Perf Configuration: 00:11:05.422 Workload Type: compress 00:11:05.422 Transfer size: 4096 bytes 00:11:05.422 Vector count 1 00:11:05.422 Module: software 00:11:05.422 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.422 Queue depth: 32 00:11:05.422 Allocate depth: 32 00:11:05.422 # threads/core: 1 00:11:05.422 Run time: 1 seconds 00:11:05.422 Verify: No 00:11:05.422 00:11:05.422 Running for 1 seconds... 
00:11:05.422 00:11:05.422 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:05.422 ------------------------------------------------------------------------------------ 00:11:05.422 0,0 46784/s 194 MiB/s 0 0 00:11:05.422 ==================================================================================== 00:11:05.422 Total 46784/s 182 MiB/s 0 0' 00:11:05.422 11:55:10 -- accel/accel.sh@20 -- # IFS=: 00:11:05.422 11:55:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.422 11:55:10 -- accel/accel.sh@20 -- # read -r var val 00:11:05.422 11:55:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.422 11:55:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.422 11:55:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.422 11:55:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.422 11:55:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.422 11:55:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.422 11:55:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.422 11:55:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.422 11:55:10 -- accel/accel.sh@42 -- # jq -r . 00:11:05.422 [2024-11-29 11:55:10.721382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:05.422 [2024-11-29 11:55:10.721498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:11:05.422 [2024-11-29 11:55:10.858133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.681 [2024-11-29 11:55:10.987568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=0x1 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=compress 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 
00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=software 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@23 -- # accel_module=software 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=32 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=32 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=1 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.681 11:55:11 -- accel/accel.sh@21 -- # val=No 00:11:05.681 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.681 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.682 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.682 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.682 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.682 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:05.682 11:55:11 -- accel/accel.sh@21 -- # val= 00:11:05.682 11:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.682 11:55:11 -- accel/accel.sh@20 -- # IFS=: 00:11:05.682 11:55:11 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 
00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@21 -- # val= 00:11:07.058 11:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # IFS=: 00:11:07.058 11:55:12 -- accel/accel.sh@20 -- # read -r var val 00:11:07.058 11:55:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:07.058 11:55:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:07.058 11:55:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:07.058 00:11:07.058 real 0m3.206s 00:11:07.058 user 0m2.678s 00:11:07.058 sys 0m0.324s 00:11:07.058 11:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.058 ************************************ 00:11:07.058 END TEST accel_comp 00:11:07.058 ************************************ 00:11:07.058 11:55:12 -- common/autotest_common.sh@10 -- # set +x 00:11:07.058 11:55:12 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:07.058 11:55:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:07.058 11:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.058 11:55:12 -- common/autotest_common.sh@10 -- # set +x 00:11:07.058 ************************************ 00:11:07.058 START TEST accel_decomp 00:11:07.058 ************************************ 00:11:07.059 11:55:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:07.059 11:55:12 -- accel/accel.sh@16 -- # local accel_opc 00:11:07.059 11:55:12 -- accel/accel.sh@17 -- # local accel_module 00:11:07.059 11:55:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:07.059 11:55:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:07.059 11:55:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.059 11:55:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.059 11:55:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.059 11:55:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.059 11:55:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.059 11:55:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.059 11:55:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.059 11:55:12 -- accel/accel.sh@42 -- # jq -r . 00:11:07.059 [2024-11-29 11:55:12.375247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:07.059 [2024-11-29 11:55:12.375453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68839 ] 00:11:07.059 [2024-11-29 11:55:12.518182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.317 [2024-11-29 11:55:12.658672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.693 11:55:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:08.694 00:11:08.694 SPDK Configuration: 00:11:08.694 Core mask: 0x1 00:11:08.694 00:11:08.694 Accel Perf Configuration: 00:11:08.694 Workload Type: decompress 00:11:08.694 Transfer size: 4096 bytes 00:11:08.694 Vector count 1 00:11:08.694 Module: software 00:11:08.694 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.694 Queue depth: 32 00:11:08.694 Allocate depth: 32 00:11:08.694 # threads/core: 1 00:11:08.694 Run time: 1 seconds 00:11:08.694 Verify: Yes 00:11:08.694 00:11:08.694 Running for 1 seconds... 00:11:08.694 00:11:08.694 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.694 ------------------------------------------------------------------------------------ 00:11:08.694 0,0 66944/s 123 MiB/s 0 0 00:11:08.694 ==================================================================================== 00:11:08.694 Total 66944/s 261 MiB/s 0 0' 00:11:08.694 11:55:13 -- accel/accel.sh@20 -- # IFS=: 00:11:08.694 11:55:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:08.694 11:55:13 -- accel/accel.sh@20 -- # read -r var val 00:11:08.694 11:55:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:08.694 11:55:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.694 11:55:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.694 11:55:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.694 11:55:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.694 11:55:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.694 11:55:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.694 11:55:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.694 11:55:13 -- accel/accel.sh@42 -- # jq -r . 00:11:08.694 [2024-11-29 11:55:14.007738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
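Note: the compress/decompress cases point accel_perf at the bundled test corpus via -l, and the 'Preparing input file...' step presumably compresses that file first so the timed run can decompress 4096-byte chunks with the software module. A standalone sketch using the same paths as this job:

  # 1-second decompress run against the bundled bib file, with verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y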
00:11:08.694 [2024-11-29 11:55:14.007916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68864 ] 00:11:08.694 [2024-11-29 11:55:14.148904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.953 [2024-11-29 11:55:14.282266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=0x1 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=decompress 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=software 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=32 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- 
accel/accel.sh@21 -- # val=32 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=1 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val=Yes 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:08.953 11:55:14 -- accel/accel.sh@21 -- # val= 00:11:08.953 11:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # IFS=: 00:11:08.953 11:55:14 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@21 -- # val= 00:11:10.329 11:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # IFS=: 00:11:10.329 11:55:15 -- accel/accel.sh@20 -- # read -r var val 00:11:10.329 11:55:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.329 11:55:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:10.329 11:55:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.329 00:11:10.329 real 0m3.255s 00:11:10.329 user 0m2.721s 00:11:10.329 sys 0m0.326s 00:11:10.329 11:55:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.329 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:11:10.329 ************************************ 00:11:10.329 END TEST accel_decomp 00:11:10.329 ************************************ 00:11:10.329 11:55:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:11:10.329 11:55:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:10.329 11:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.329 11:55:15 -- common/autotest_common.sh@10 -- # set +x 00:11:10.329 ************************************ 00:11:10.329 START TEST accel_decmop_full 00:11:10.329 ************************************ 00:11:10.329 11:55:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:10.329 11:55:15 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.329 11:55:15 -- accel/accel.sh@17 -- # local accel_module 00:11:10.329 11:55:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:10.329 11:55:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:10.329 11:55:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.329 11:55:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.329 11:55:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.329 11:55:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.329 11:55:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.329 11:55:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.329 11:55:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.329 11:55:15 -- accel/accel.sh@42 -- # jq -r . 00:11:10.329 [2024-11-29 11:55:15.694710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:10.329 [2024-11-29 11:55:15.694883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68893 ] 00:11:10.329 [2024-11-29 11:55:15.835111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.589 [2024-11-29 11:55:15.968206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.966 11:55:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:11.966 00:11:11.966 SPDK Configuration: 00:11:11.966 Core mask: 0x1 00:11:11.966 00:11:11.966 Accel Perf Configuration: 00:11:11.966 Workload Type: decompress 00:11:11.966 Transfer size: 111250 bytes 00:11:11.966 Vector count 1 00:11:11.966 Module: software 00:11:11.966 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:11.966 Queue depth: 32 00:11:11.966 Allocate depth: 32 00:11:11.966 # threads/core: 1 00:11:11.966 Run time: 1 seconds 00:11:11.966 Verify: Yes 00:11:11.966 00:11:11.966 Running for 1 seconds... 
00:11:11.966 00:11:11.966 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.966 ------------------------------------------------------------------------------------ 00:11:11.966 0,0 4416/s 182 MiB/s 0 0 00:11:11.966 ==================================================================================== 00:11:11.966 Total 4416/s 468 MiB/s 0 0' 00:11:11.966 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:11.966 11:55:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:11.966 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:11.966 11:55:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:11.966 11:55:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.966 11:55:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.966 11:55:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.966 11:55:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.966 11:55:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.966 11:55:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.966 11:55:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.966 11:55:17 -- accel/accel.sh@42 -- # jq -r . 00:11:11.966 [2024-11-29 11:55:17.314075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:11.966 [2024-11-29 11:55:17.314525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68918 ] 00:11:11.966 [2024-11-29 11:55:17.460721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.225 [2024-11-29 11:55:17.607647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=0x1 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=decompress 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:12.225 11:55:17 -- accel/accel.sh@20 
-- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=software 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@23 -- # accel_module=software 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=32 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=32 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=1 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val=Yes 00:11:12.225 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.225 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.225 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.226 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.226 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.226 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:12.226 11:55:17 -- accel/accel.sh@21 -- # val= 00:11:12.226 11:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.226 11:55:17 -- accel/accel.sh@20 -- # IFS=: 00:11:12.226 11:55:17 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # 
val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@21 -- # val= 00:11:13.605 11:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # IFS=: 00:11:13.605 11:55:18 -- accel/accel.sh@20 -- # read -r var val 00:11:13.605 11:55:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:13.605 11:55:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:13.605 11:55:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.605 00:11:13.605 real 0m3.304s 00:11:13.605 user 0m2.764s 00:11:13.605 sys 0m0.333s 00:11:13.605 11:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:13.605 11:55:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.605 ************************************ 00:11:13.605 END TEST accel_decmop_full 00:11:13.605 ************************************ 00:11:13.605 11:55:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:13.605 11:55:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:13.605 11:55:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.605 11:55:19 -- common/autotest_common.sh@10 -- # set +x 00:11:13.605 ************************************ 00:11:13.605 START TEST accel_decomp_mcore 00:11:13.605 ************************************ 00:11:13.605 11:55:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:13.605 11:55:19 -- accel/accel.sh@16 -- # local accel_opc 00:11:13.605 11:55:19 -- accel/accel.sh@17 -- # local accel_module 00:11:13.605 11:55:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:13.605 11:55:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:13.605 11:55:19 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.605 11:55:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.605 11:55:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.605 11:55:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.605 11:55:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.605 11:55:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.605 11:55:19 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.605 11:55:19 -- accel/accel.sh@42 -- # jq -r . 00:11:13.605 [2024-11-29 11:55:19.053789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:13.605 [2024-11-29 11:55:19.054112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68957 ] 00:11:13.864 [2024-11-29 11:55:19.189325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.864 [2024-11-29 11:55:19.319702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.864 [2024-11-29 11:55:19.319865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.864 [2024-11-29 11:55:19.320001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.864 [2024-11-29 11:55:19.320260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.242 11:55:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:15.242 00:11:15.242 SPDK Configuration: 00:11:15.242 Core mask: 0xf 00:11:15.242 00:11:15.242 Accel Perf Configuration: 00:11:15.242 Workload Type: decompress 00:11:15.242 Transfer size: 4096 bytes 00:11:15.242 Vector count 1 00:11:15.242 Module: software 00:11:15.242 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:15.242 Queue depth: 32 00:11:15.242 Allocate depth: 32 00:11:15.242 # threads/core: 1 00:11:15.242 Run time: 1 seconds 00:11:15.242 Verify: Yes 00:11:15.242 00:11:15.242 Running for 1 seconds... 00:11:15.242 00:11:15.242 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:15.242 ------------------------------------------------------------------------------------ 00:11:15.242 0,0 50432/s 92 MiB/s 0 0 00:11:15.242 3,0 49984/s 92 MiB/s 0 0 00:11:15.242 2,0 49152/s 90 MiB/s 0 0 00:11:15.242 1,0 49728/s 91 MiB/s 0 0 00:11:15.242 ==================================================================================== 00:11:15.242 Total 199296/s 778 MiB/s 0 0' 00:11:15.242 11:55:20 -- accel/accel.sh@20 -- # IFS=: 00:11:15.242 11:55:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.242 11:55:20 -- accel/accel.sh@20 -- # read -r var val 00:11:15.242 11:55:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:15.242 11:55:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.242 11:55:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.242 11:55:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.242 11:55:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.242 11:55:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.242 11:55:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.242 11:55:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.242 11:55:20 -- accel/accel.sh@42 -- # jq -r . 00:11:15.242 [2024-11-29 11:55:20.712297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:15.242 [2024-11-29 11:55:20.712706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68975 ] 00:11:15.502 [2024-11-29 11:55:20.850727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.502 [2024-11-29 11:55:20.986243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.502 [2024-11-29 11:55:20.986388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.502 [2024-11-29 11:55:20.986496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.502 [2024-11-29 11:55:20.986500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=0xf 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=decompress 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=software 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@23 -- # accel_module=software 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 
00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=32 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=32 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=1 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val=Yes 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:15.762 11:55:21 -- accel/accel.sh@21 -- # val= 00:11:15.762 11:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # IFS=: 00:11:15.762 11:55:21 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- 
accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@21 -- # val= 00:11:17.144 11:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # IFS=: 00:11:17.144 11:55:22 -- accel/accel.sh@20 -- # read -r var val 00:11:17.144 11:55:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:17.144 11:55:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:17.144 ************************************ 00:11:17.144 END TEST accel_decomp_mcore 00:11:17.144 ************************************ 00:11:17.144 11:55:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:17.144 00:11:17.144 real 0m3.282s 00:11:17.144 user 0m4.991s 00:11:17.144 sys 0m0.173s 00:11:17.144 11:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.144 11:55:22 -- common/autotest_common.sh@10 -- # set +x 00:11:17.144 11:55:22 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:17.144 11:55:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:17.144 11:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.144 11:55:22 -- common/autotest_common.sh@10 -- # set +x 00:11:17.144 ************************************ 00:11:17.144 START TEST accel_decomp_full_mcore 00:11:17.144 ************************************ 00:11:17.144 11:55:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:17.144 11:55:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:17.144 11:55:22 -- accel/accel.sh@17 -- # local accel_module 00:11:17.144 11:55:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:17.144 11:55:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:17.144 11:55:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.144 11:55:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.144 11:55:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.144 11:55:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.144 11:55:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.144 11:55:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.144 11:55:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.144 11:55:22 -- accel/accel.sh@42 -- # jq -r . 00:11:17.144 [2024-11-29 11:55:22.389826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:17.144 [2024-11-29 11:55:22.389947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69018 ] 00:11:17.144 [2024-11-29 11:55:22.530239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.409 [2024-11-29 11:55:22.664753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.409 [2024-11-29 11:55:22.664914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.409 [2024-11-29 11:55:22.665030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.409 [2024-11-29 11:55:22.665223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.822 11:55:24 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:18.822 00:11:18.822 SPDK Configuration: 00:11:18.822 Core mask: 0xf 00:11:18.822 00:11:18.822 Accel Perf Configuration: 00:11:18.822 Workload Type: decompress 00:11:18.822 Transfer size: 111250 bytes 00:11:18.822 Vector count 1 00:11:18.822 Module: software 00:11:18.822 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.822 Queue depth: 32 00:11:18.822 Allocate depth: 32 00:11:18.822 # threads/core: 1 00:11:18.822 Run time: 1 seconds 00:11:18.822 Verify: Yes 00:11:18.822 00:11:18.822 Running for 1 seconds... 00:11:18.822 00:11:18.822 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:18.822 ------------------------------------------------------------------------------------ 00:11:18.822 0,0 3872/s 159 MiB/s 0 0 00:11:18.822 3,0 4416/s 182 MiB/s 0 0 00:11:18.822 2,0 4192/s 173 MiB/s 0 0 00:11:18.822 1,0 4288/s 177 MiB/s 0 0 00:11:18.822 ==================================================================================== 00:11:18.822 Total 16768/s 1779 MiB/s 0 0' 00:11:18.822 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:18.822 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:18.822 11:55:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.822 11:55:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.822 11:55:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.822 11:55:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.822 11:55:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.822 11:55:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.822 11:55:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.822 11:55:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.822 11:55:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.822 11:55:24 -- accel/accel.sh@42 -- # jq -r . 00:11:18.822 [2024-11-29 11:55:24.028706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:18.822 [2024-11-29 11:55:24.028820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69035 ] 00:11:18.822 [2024-11-29 11:55:24.161678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.822 [2024-11-29 11:55:24.296377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.822 [2024-11-29 11:55:24.296561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.822 [2024-11-29 11:55:24.296664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.822 [2024-11-29 11:55:24.296973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=0xf 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=decompress 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=software 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@23 -- # accel_module=software 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 
00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=32 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=32 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=1 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val=Yes 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.082 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:19.082 11:55:24 -- accel/accel.sh@21 -- # val= 00:11:19.082 11:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.083 11:55:24 -- accel/accel.sh@20 -- # IFS=: 00:11:19.083 11:55:24 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- 
accel/accel.sh@20 -- # read -r var val 00:11:20.459 11:55:25 -- accel/accel.sh@21 -- # val= 00:11:20.459 11:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # IFS=: 00:11:20.459 11:55:25 -- accel/accel.sh@20 -- # read -r var val 00:11:20.460 11:55:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:20.460 11:55:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:20.460 11:55:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.460 00:11:20.460 real 0m3.201s 00:11:20.460 user 0m9.804s 00:11:20.460 sys 0m0.350s 00:11:20.460 11:55:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.460 11:55:25 -- common/autotest_common.sh@10 -- # set +x 00:11:20.460 ************************************ 00:11:20.460 END TEST accel_decomp_full_mcore 00:11:20.460 ************************************ 00:11:20.460 11:55:25 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:20.460 11:55:25 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:20.460 11:55:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.460 11:55:25 -- common/autotest_common.sh@10 -- # set +x 00:11:20.460 ************************************ 00:11:20.460 START TEST accel_decomp_mthread 00:11:20.460 ************************************ 00:11:20.460 11:55:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:20.460 11:55:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:20.460 11:55:25 -- accel/accel.sh@17 -- # local accel_module 00:11:20.460 11:55:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:20.460 11:55:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:20.460 11:55:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:20.460 11:55:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.460 11:55:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.460 11:55:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.460 11:55:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.460 11:55:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.460 11:55:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.460 11:55:25 -- accel/accel.sh@42 -- # jq -r . 00:11:20.460 [2024-11-29 11:55:25.634501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:20.460 [2024-11-29 11:55:25.634617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69078 ] 00:11:20.460 [2024-11-29 11:55:25.767675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.460 [2024-11-29 11:55:25.864459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.835 11:55:27 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:21.835 00:11:21.835 SPDK Configuration: 00:11:21.835 Core mask: 0x1 00:11:21.835 00:11:21.835 Accel Perf Configuration: 00:11:21.835 Workload Type: decompress 00:11:21.835 Transfer size: 4096 bytes 00:11:21.835 Vector count 1 00:11:21.835 Module: software 00:11:21.835 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.835 Queue depth: 32 00:11:21.835 Allocate depth: 32 00:11:21.835 # threads/core: 2 00:11:21.835 Run time: 1 seconds 00:11:21.835 Verify: Yes 00:11:21.835 00:11:21.835 Running for 1 seconds... 00:11:21.835 00:11:21.835 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:21.835 ------------------------------------------------------------------------------------ 00:11:21.835 0,1 33728/s 62 MiB/s 0 0 00:11:21.835 0,0 33632/s 61 MiB/s 0 0 00:11:21.835 ==================================================================================== 00:11:21.835 Total 67360/s 263 MiB/s 0 0' 00:11:21.835 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:21.835 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:21.835 11:55:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:21.835 11:55:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:21.835 11:55:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.835 11:55:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.835 11:55:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.835 11:55:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.835 11:55:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.835 11:55:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.835 11:55:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.835 11:55:27 -- accel/accel.sh@42 -- # jq -r . 00:11:21.835 [2024-11-29 11:55:27.115704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:21.835 [2024-11-29 11:55:27.116198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69092 ] 00:11:21.835 [2024-11-29 11:55:27.253340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.093 [2024-11-29 11:55:27.352542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=0x1 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=decompress 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=software 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@23 -- # accel_module=software 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=32 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- 
accel/accel.sh@21 -- # val=32 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=2 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val=Yes 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:22.093 11:55:27 -- accel/accel.sh@21 -- # val= 00:11:22.093 11:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # IFS=: 00:11:22.093 11:55:27 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 ************************************ 00:11:23.468 END TEST accel_decomp_mthread 00:11:23.468 ************************************ 00:11:23.468 11:55:28 -- accel/accel.sh@21 -- # val= 00:11:23.468 11:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # IFS=: 00:11:23.468 11:55:28 -- accel/accel.sh@20 -- # read -r var val 00:11:23.468 11:55:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:23.468 11:55:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:23.468 11:55:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.468 00:11:23.468 real 0m2.961s 00:11:23.468 user 0m2.515s 00:11:23.468 sys 0m0.236s 00:11:23.468 11:55:28 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:11:23.468 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.468 11:55:28 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.468 11:55:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:23.468 11:55:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.468 11:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:23.468 ************************************ 00:11:23.468 START TEST accel_deomp_full_mthread 00:11:23.468 ************************************ 00:11:23.468 11:55:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.468 11:55:28 -- accel/accel.sh@16 -- # local accel_opc 00:11:23.468 11:55:28 -- accel/accel.sh@17 -- # local accel_module 00:11:23.468 11:55:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.468 11:55:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:23.468 11:55:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.468 11:55:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.468 11:55:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.468 11:55:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.468 11:55:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.468 11:55:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.468 11:55:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.468 11:55:28 -- accel/accel.sh@42 -- # jq -r . 00:11:23.468 [2024-11-29 11:55:28.653425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:23.468 [2024-11-29 11:55:28.653824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69131 ] 00:11:23.468 [2024-11-29 11:55:28.792173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.468 [2024-11-29 11:55:28.887450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.859 11:55:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:24.859 00:11:24.859 SPDK Configuration: 00:11:24.859 Core mask: 0x1 00:11:24.859 00:11:24.859 Accel Perf Configuration: 00:11:24.859 Workload Type: decompress 00:11:24.859 Transfer size: 111250 bytes 00:11:24.859 Vector count 1 00:11:24.859 Module: software 00:11:24.859 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.859 Queue depth: 32 00:11:24.859 Allocate depth: 32 00:11:24.859 # threads/core: 2 00:11:24.859 Run time: 1 seconds 00:11:24.859 Verify: Yes 00:11:24.859 00:11:24.859 Running for 1 seconds... 
00:11:24.859 00:11:24.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.859 ------------------------------------------------------------------------------------ 00:11:24.859 0,1 2240/s 92 MiB/s 0 0 00:11:24.859 0,0 2176/s 89 MiB/s 0 0 00:11:24.859 ==================================================================================== 00:11:24.859 Total 4416/s 468 MiB/s 0 0' 00:11:24.859 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:24.859 11:55:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:24.859 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:24.859 11:55:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:24.859 11:55:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.859 11:55:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.859 11:55:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.859 11:55:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.859 11:55:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.859 11:55:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.859 11:55:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.859 11:55:30 -- accel/accel.sh@42 -- # jq -r . 00:11:24.859 [2024-11-29 11:55:30.161566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:24.860 [2024-11-29 11:55:30.161738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69146 ] 00:11:24.860 [2024-11-29 11:55:30.304992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.119 [2024-11-29 11:55:30.399488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=0x1 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=decompress 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=software 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@23 -- # accel_module=software 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=32 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=32 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=2 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val=Yes 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:25.119 11:55:30 -- accel/accel.sh@21 -- # val= 00:11:25.119 11:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # IFS=: 00:11:25.119 11:55:30 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # 
read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@21 -- # val= 00:11:26.494 11:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # IFS=: 00:11:26.494 11:55:31 -- accel/accel.sh@20 -- # read -r var val 00:11:26.494 11:55:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:26.494 11:55:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:26.494 11:55:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.494 00:11:26.494 real 0m3.013s 00:11:26.494 user 0m2.580s 00:11:26.494 sys 0m0.223s 00:11:26.494 11:55:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:26.494 11:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 ************************************ 00:11:26.494 END TEST accel_deomp_full_mthread 00:11:26.494 ************************************ 00:11:26.494 11:55:31 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:26.494 11:55:31 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:26.494 11:55:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:26.494 11:55:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.494 11:55:31 -- accel/accel.sh@129 -- # build_accel_config 00:11:26.494 11:55:31 -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 11:55:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.494 11:55:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.494 11:55:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.494 11:55:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.494 11:55:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.494 11:55:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.494 11:55:31 -- accel/accel.sh@42 -- # jq -r . 00:11:26.494 ************************************ 00:11:26.494 START TEST accel_dif_functional_tests 00:11:26.494 ************************************ 00:11:26.494 11:55:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:26.494 [2024-11-29 11:55:31.748669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
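Both accel_perf in the run above and the dif binary launched here take their accel configuration as JSON on -c /dev/fd/62: build_accel_config assembles the accel_json_cfg entries and the harness feeds the result to the binary through process substitution, so no config file is written to disk. A minimal sketch of that pattern follows; the config-assembly step is an approximation of what the trace does (this particular run passed an empty config), while the accel_perf options are the ones visible above.

    # Sketch only -- mirrors the accel_perf invocation visible in the trace above.
    accel_json_cfg=()    # entries would be JSON snippets (placeholder; empty in this run)
    cfg=$(printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' \
          "$(IFS=,; echo "${accel_json_cfg[*]}")" | jq -r .)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo "$cfg") \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2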
00:11:26.494 [2024-11-29 11:55:31.748790] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69182 ] 00:11:26.494 [2024-11-29 11:55:31.887019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.494 [2024-11-29 11:55:31.989259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.494 [2024-11-29 11:55:31.989414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.494 [2024-11-29 11:55:31.989425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.752 00:11:26.753 00:11:26.753 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.753 http://cunit.sourceforge.net/ 00:11:26.753 00:11:26.753 00:11:26.753 Suite: accel_dif 00:11:26.753 Test: verify: DIF generated, GUARD check ...passed 00:11:26.753 Test: verify: DIF generated, APPTAG check ...passed 00:11:26.753 Test: verify: DIF generated, REFTAG check ...passed 00:11:26.753 Test: verify: DIF not generated, GUARD check ...passed 00:11:26.753 Test: verify: DIF not generated, APPTAG check ...[2024-11-29 11:55:32.087905] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:26.753 [2024-11-29 11:55:32.088101] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:26.753 passed 00:11:26.753 Test: verify: DIF not generated, REFTAG check ...[2024-11-29 11:55:32.088173] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:26.753 [2024-11-29 11:55:32.088318] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:26.753 passed 00:11:26.753 Test: verify: APPTAG correct, APPTAG check ...[2024-11-29 11:55:32.088361] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:26.753 [2024-11-29 11:55:32.088447] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:26.753 passed 00:11:26.753 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:26.753 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:26.753 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-11-29 11:55:32.088564] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:26.753 passed 00:11:26.753 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:26.753 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-29 11:55:32.088935] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:26.753 passed 00:11:26.753 Test: generate copy: DIF generated, GUARD check ...passed 00:11:26.753 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:26.753 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:26.753 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:26.753 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:26.753 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:26.753 Test: generate copy: iovecs-len validate ...passed 00:11:26.753 Test: generate copy: buffer alignment validate ...[2024-11-29 11:55:32.089927] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:26.753 passed 00:11:26.753 00:11:26.753 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.753 suites 1 1 n/a 0 0 00:11:26.753 tests 20 20 20 0 0 00:11:26.753 asserts 204 204 204 0 n/a 00:11:26.753 00:11:26.753 Elapsed time = 0.007 seconds 00:11:27.012 00:11:27.012 real 0m0.609s 00:11:27.012 user 0m0.825s 00:11:27.012 sys 0m0.165s 00:11:27.012 11:55:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.012 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.012 ************************************ 00:11:27.012 END TEST accel_dif_functional_tests 00:11:27.012 ************************************ 00:11:27.012 ************************************ 00:11:27.012 END TEST accel 00:11:27.012 ************************************ 00:11:27.012 00:11:27.012 real 1m9.374s 00:11:27.012 user 1m12.313s 00:11:27.012 sys 0m8.021s 00:11:27.012 11:55:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.012 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.012 11:55:32 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:27.012 11:55:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.012 11:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.012 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.012 ************************************ 00:11:27.012 START TEST accel_rpc 00:11:27.012 ************************************ 00:11:27.012 11:55:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:27.012 * Looking for test storage... 00:11:27.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:27.012 11:55:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:27.012 11:55:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:27.012 11:55:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:27.271 11:55:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:27.271 11:55:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:27.271 11:55:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:27.271 11:55:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:27.271 11:55:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:27.271 11:55:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:27.271 11:55:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.271 11:55:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:27.271 11:55:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:27.271 11:55:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:27.271 11:55:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:27.271 11:55:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:27.271 11:55:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:27.271 11:55:32 -- scripts/common.sh@344 -- # : 1 00:11:27.271 11:55:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:27.271 11:55:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.271 11:55:32 -- scripts/common.sh@364 -- # decimal 1 00:11:27.271 11:55:32 -- scripts/common.sh@352 -- # local d=1 00:11:27.271 11:55:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.271 11:55:32 -- scripts/common.sh@354 -- # echo 1 00:11:27.271 11:55:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:27.271 11:55:32 -- scripts/common.sh@365 -- # decimal 2 00:11:27.271 11:55:32 -- scripts/common.sh@352 -- # local d=2 00:11:27.271 11:55:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.271 11:55:32 -- scripts/common.sh@354 -- # echo 2 00:11:27.271 11:55:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:27.271 11:55:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:27.271 11:55:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:27.271 11:55:32 -- scripts/common.sh@367 -- # return 0 00:11:27.271 11:55:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.271 11:55:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:27.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.271 --rc genhtml_branch_coverage=1 00:11:27.271 --rc genhtml_function_coverage=1 00:11:27.271 --rc genhtml_legend=1 00:11:27.271 --rc geninfo_all_blocks=1 00:11:27.271 --rc geninfo_unexecuted_blocks=1 00:11:27.271 00:11:27.271 ' 00:11:27.271 11:55:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:27.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.271 --rc genhtml_branch_coverage=1 00:11:27.271 --rc genhtml_function_coverage=1 00:11:27.271 --rc genhtml_legend=1 00:11:27.271 --rc geninfo_all_blocks=1 00:11:27.271 --rc geninfo_unexecuted_blocks=1 00:11:27.271 00:11:27.271 ' 00:11:27.271 11:55:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:27.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.271 --rc genhtml_branch_coverage=1 00:11:27.271 --rc genhtml_function_coverage=1 00:11:27.271 --rc genhtml_legend=1 00:11:27.271 --rc geninfo_all_blocks=1 00:11:27.271 --rc geninfo_unexecuted_blocks=1 00:11:27.271 00:11:27.271 ' 00:11:27.271 11:55:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:27.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.271 --rc genhtml_branch_coverage=1 00:11:27.271 --rc genhtml_function_coverage=1 00:11:27.271 --rc genhtml_legend=1 00:11:27.271 --rc geninfo_all_blocks=1 00:11:27.271 --rc geninfo_unexecuted_blocks=1 00:11:27.271 00:11:27.271 ' 00:11:27.271 11:55:32 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:27.271 11:55:32 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69259 00:11:27.271 11:55:32 -- accel/accel_rpc.sh@15 -- # waitforlisten 69259 00:11:27.271 11:55:32 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:27.271 11:55:32 -- common/autotest_common.sh@829 -- # '[' -z 69259 ']' 00:11:27.271 11:55:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.271 11:55:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.271 11:55:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
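At this point accel_rpc.sh has launched spdk_tgt with --wait-for-rpc and is waiting on /var/tmp/spdk.sock, precisely so that opcode-to-module assignments can be issued before framework initialization. The assignment sequence traced below reduces to a handful of rpc.py calls; a condensed sketch (pid handling and the waitforlisten polling are omitted):

    # Condensed sketch of the accel_assign_opcode flow traced below.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect   # accepted while init is still deferred
    $rpc accel_assign_opc -o copy -m software    # re-assign the copy opcode to the software module
    $rpc framework_start_init                    # initialize with the assignment in place
    $rpc accel_get_opc_assignments | jq -r .copy # the test expects "software" here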
00:11:27.271 11:55:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.271 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.271 [2024-11-29 11:55:32.658839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:27.271 [2024-11-29 11:55:32.659196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69259 ] 00:11:27.531 [2024-11-29 11:55:32.795538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.531 [2024-11-29 11:55:32.894208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:27.531 [2024-11-29 11:55:32.894600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.531 11:55:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.531 11:55:32 -- common/autotest_common.sh@862 -- # return 0 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:27.531 11:55:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.531 11:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.531 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.531 ************************************ 00:11:27.531 START TEST accel_assign_opcode 00:11:27.531 ************************************ 00:11:27.531 11:55:32 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:27.531 11:55:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.531 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.531 [2024-11-29 11:55:32.943168] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:27.531 11:55:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:27.531 11:55:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.531 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.531 [2024-11-29 11:55:32.955200] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:27.531 11:55:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.531 11:55:32 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:27.531 11:55:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.531 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:11:27.790 11:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.790 11:55:33 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:27.790 11:55:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.790 11:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:27.790 11:55:33 -- accel/accel_rpc.sh@42 -- # grep software 00:11:27.790 11:55:33 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:27.790 11:55:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.790 software 00:11:27.790 00:11:27.790 
real 0m0.317s 00:11:27.790 user 0m0.065s 00:11:27.790 sys 0m0.007s 00:11:27.790 ************************************ 00:11:27.790 END TEST accel_assign_opcode 00:11:27.790 ************************************ 00:11:27.790 11:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:27.790 11:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:28.049 11:55:33 -- accel/accel_rpc.sh@55 -- # killprocess 69259 00:11:28.050 11:55:33 -- common/autotest_common.sh@936 -- # '[' -z 69259 ']' 00:11:28.050 11:55:33 -- common/autotest_common.sh@940 -- # kill -0 69259 00:11:28.050 11:55:33 -- common/autotest_common.sh@941 -- # uname 00:11:28.050 11:55:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.050 11:55:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69259 00:11:28.050 killing process with pid 69259 00:11:28.050 11:55:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.050 11:55:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.050 11:55:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69259' 00:11:28.050 11:55:33 -- common/autotest_common.sh@955 -- # kill 69259 00:11:28.050 11:55:33 -- common/autotest_common.sh@960 -- # wait 69259 00:11:28.308 00:11:28.309 real 0m1.336s 00:11:28.309 user 0m1.225s 00:11:28.309 sys 0m0.457s 00:11:28.309 11:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:28.309 11:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:28.309 ************************************ 00:11:28.309 END TEST accel_rpc 00:11:28.309 ************************************ 00:11:28.309 11:55:33 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:28.309 11:55:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:28.309 11:55:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.309 11:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:28.309 ************************************ 00:11:28.309 START TEST app_cmdline 00:11:28.309 ************************************ 00:11:28.309 11:55:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:28.567 * Looking for test storage... 
00:11:28.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:28.567 11:55:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:28.567 11:55:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:28.567 11:55:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:28.567 11:55:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:28.567 11:55:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:28.568 11:55:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.568 11:55:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.568 11:55:33 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.568 11:55:33 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.568 11:55:33 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.568 11:55:33 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.568 11:55:33 -- scripts/common.sh@337 -- # local 'op=<' 00:11:28.568 11:55:33 -- scripts/common.sh@339 -- # ver1_l=2 00:11:28.568 11:55:33 -- scripts/common.sh@340 -- # ver2_l=1 00:11:28.568 11:55:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.568 11:55:33 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.568 11:55:33 -- scripts/common.sh@344 -- # : 1 00:11:28.568 11:55:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.568 11:55:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.568 11:55:33 -- scripts/common.sh@364 -- # decimal 1 00:11:28.568 11:55:33 -- scripts/common.sh@352 -- # local d=1 00:11:28.568 11:55:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.568 11:55:33 -- scripts/common.sh@354 -- # echo 1 00:11:28.568 11:55:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.568 11:55:33 -- scripts/common.sh@365 -- # decimal 2 00:11:28.568 11:55:33 -- scripts/common.sh@352 -- # local d=2 00:11:28.568 11:55:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.568 11:55:33 -- scripts/common.sh@354 -- # echo 2 00:11:28.568 11:55:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:28.568 11:55:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.568 11:55:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.568 11:55:33 -- scripts/common.sh@367 -- # return 0 00:11:28.568 11:55:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.568 11:55:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.568 --rc genhtml_branch_coverage=1 00:11:28.568 --rc genhtml_function_coverage=1 00:11:28.568 --rc genhtml_legend=1 00:11:28.568 --rc geninfo_all_blocks=1 00:11:28.568 --rc geninfo_unexecuted_blocks=1 00:11:28.568 00:11:28.568 ' 00:11:28.568 11:55:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.568 --rc genhtml_branch_coverage=1 00:11:28.568 --rc genhtml_function_coverage=1 00:11:28.568 --rc genhtml_legend=1 00:11:28.568 --rc geninfo_all_blocks=1 00:11:28.568 --rc geninfo_unexecuted_blocks=1 00:11:28.568 00:11:28.568 ' 00:11:28.568 11:55:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.568 --rc genhtml_branch_coverage=1 00:11:28.568 --rc genhtml_function_coverage=1 00:11:28.568 --rc genhtml_legend=1 00:11:28.568 --rc geninfo_all_blocks=1 00:11:28.568 --rc geninfo_unexecuted_blocks=1 00:11:28.568 00:11:28.568 ' 00:11:28.568 11:55:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.568 --rc genhtml_branch_coverage=1 00:11:28.568 --rc genhtml_function_coverage=1 00:11:28.568 --rc genhtml_legend=1 00:11:28.568 --rc geninfo_all_blocks=1 00:11:28.568 --rc geninfo_unexecuted_blocks=1 00:11:28.568 00:11:28.568 ' 00:11:28.568 11:55:33 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:28.568 11:55:33 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69351 00:11:28.568 11:55:33 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:28.568 11:55:33 -- app/cmdline.sh@18 -- # waitforlisten 69351 00:11:28.568 11:55:33 -- common/autotest_common.sh@829 -- # '[' -z 69351 ']' 00:11:28.568 11:55:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.568 11:55:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.568 11:55:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.568 11:55:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.568 11:55:33 -- common/autotest_common.sh@10 -- # set +x 00:11:28.568 [2024-11-29 11:55:34.045782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:28.568 [2024-11-29 11:55:34.046241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69351 ] 00:11:28.833 [2024-11-29 11:55:34.223140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.833 [2024-11-29 11:55:34.331690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.833 [2024-11-29 11:55:34.332163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.769 11:55:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.769 11:55:35 -- common/autotest_common.sh@862 -- # return 0 00:11:29.769 11:55:35 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:30.028 { 00:11:30.028 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:11:30.028 "fields": { 00:11:30.028 "major": 24, 00:11:30.028 "minor": 1, 00:11:30.028 "patch": 1, 00:11:30.028 "suffix": "-pre", 00:11:30.028 "commit": "c13c99a5e" 00:11:30.028 } 00:11:30.028 } 00:11:30.028 11:55:35 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:30.028 11:55:35 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:30.028 11:55:35 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:30.028 11:55:35 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:30.028 11:55:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:30.028 11:55:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:30.028 11:55:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.028 11:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:30.028 11:55:35 -- app/cmdline.sh@26 -- # sort 00:11:30.028 11:55:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.028 11:55:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:30.028 11:55:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:30.028 11:55:35 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:30.028 11:55:35 -- common/autotest_common.sh@650 -- # local es=0 00:11:30.028 11:55:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:30.028 11:55:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.028 11:55:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.028 11:55:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.028 11:55:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.028 11:55:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.028 11:55:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:30.028 11:55:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.028 11:55:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:30.028 11:55:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:30.287 request: 00:11:30.287 { 00:11:30.287 "method": "env_dpdk_get_mem_stats", 00:11:30.287 "req_id": 1 00:11:30.287 } 00:11:30.287 Got JSON-RPC error response 00:11:30.287 response: 00:11:30.287 { 00:11:30.287 "code": -32601, 00:11:30.287 "message": "Method not found" 00:11:30.287 } 00:11:30.287 11:55:35 -- common/autotest_common.sh@653 -- # es=1 00:11:30.287 11:55:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:30.287 11:55:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:30.287 11:55:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:30.287 11:55:35 -- app/cmdline.sh@1 -- # killprocess 69351 00:11:30.287 11:55:35 -- common/autotest_common.sh@936 -- # '[' -z 69351 ']' 00:11:30.287 11:55:35 -- common/autotest_common.sh@940 -- # kill -0 69351 00:11:30.287 11:55:35 -- common/autotest_common.sh@941 -- # uname 00:11:30.287 11:55:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.287 11:55:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69351 00:11:30.287 killing process with pid 69351 00:11:30.287 11:55:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:30.287 11:55:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:30.287 11:55:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69351' 00:11:30.287 11:55:35 -- common/autotest_common.sh@955 -- # kill 69351 00:11:30.287 11:55:35 -- common/autotest_common.sh@960 -- # wait 69351 00:11:30.869 00:11:30.869 real 0m2.480s 00:11:30.869 user 0m3.088s 00:11:30.869 sys 0m0.517s 00:11:30.869 ************************************ 00:11:30.869 END TEST app_cmdline 00:11:30.869 ************************************ 00:11:30.869 11:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.869 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:11:30.869 11:55:36 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:30.869 11:55:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.869 11:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.869 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:11:30.869 
************************************ 00:11:30.869 START TEST version 00:11:30.869 ************************************ 00:11:30.869 11:55:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:31.128 * Looking for test storage... 00:11:31.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:31.128 11:55:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:31.128 11:55:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:31.128 11:55:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:31.128 11:55:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:31.128 11:55:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:31.128 11:55:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:31.128 11:55:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:31.128 11:55:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:31.128 11:55:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:31.128 11:55:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.128 11:55:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:31.128 11:55:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:31.128 11:55:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:31.128 11:55:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:31.128 11:55:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:31.128 11:55:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:31.128 11:55:36 -- scripts/common.sh@344 -- # : 1 00:11:31.128 11:55:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:31.128 11:55:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.128 11:55:36 -- scripts/common.sh@364 -- # decimal 1 00:11:31.128 11:55:36 -- scripts/common.sh@352 -- # local d=1 00:11:31.128 11:55:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.128 11:55:36 -- scripts/common.sh@354 -- # echo 1 00:11:31.128 11:55:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:31.128 11:55:36 -- scripts/common.sh@365 -- # decimal 2 00:11:31.128 11:55:36 -- scripts/common.sh@352 -- # local d=2 00:11:31.128 11:55:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.128 11:55:36 -- scripts/common.sh@354 -- # echo 2 00:11:31.128 11:55:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:31.128 11:55:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:31.128 11:55:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:31.128 11:55:36 -- scripts/common.sh@367 -- # return 0 00:11:31.128 11:55:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.128 11:55:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.128 --rc genhtml_branch_coverage=1 00:11:31.128 --rc genhtml_function_coverage=1 00:11:31.128 --rc genhtml_legend=1 00:11:31.128 --rc geninfo_all_blocks=1 00:11:31.128 --rc geninfo_unexecuted_blocks=1 00:11:31.128 00:11:31.128 ' 00:11:31.128 11:55:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:31.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.128 --rc genhtml_branch_coverage=1 00:11:31.128 --rc genhtml_function_coverage=1 00:11:31.128 --rc genhtml_legend=1 00:11:31.128 --rc geninfo_all_blocks=1 00:11:31.128 --rc geninfo_unexecuted_blocks=1 00:11:31.128 00:11:31.128 ' 00:11:31.128 11:55:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:31.129 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:31.129 --rc genhtml_branch_coverage=1 00:11:31.129 --rc genhtml_function_coverage=1 00:11:31.129 --rc genhtml_legend=1 00:11:31.129 --rc geninfo_all_blocks=1 00:11:31.129 --rc geninfo_unexecuted_blocks=1 00:11:31.129 00:11:31.129 ' 00:11:31.129 11:55:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:31.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.129 --rc genhtml_branch_coverage=1 00:11:31.129 --rc genhtml_function_coverage=1 00:11:31.129 --rc genhtml_legend=1 00:11:31.129 --rc geninfo_all_blocks=1 00:11:31.129 --rc geninfo_unexecuted_blocks=1 00:11:31.129 00:11:31.129 ' 00:11:31.129 11:55:36 -- app/version.sh@17 -- # get_header_version major 00:11:31.129 11:55:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:31.129 11:55:36 -- app/version.sh@14 -- # cut -f2 00:11:31.129 11:55:36 -- app/version.sh@14 -- # tr -d '"' 00:11:31.129 11:55:36 -- app/version.sh@17 -- # major=24 00:11:31.129 11:55:36 -- app/version.sh@18 -- # get_header_version minor 00:11:31.129 11:55:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:31.129 11:55:36 -- app/version.sh@14 -- # cut -f2 00:11:31.129 11:55:36 -- app/version.sh@14 -- # tr -d '"' 00:11:31.129 11:55:36 -- app/version.sh@18 -- # minor=1 00:11:31.129 11:55:36 -- app/version.sh@19 -- # get_header_version patch 00:11:31.129 11:55:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:31.129 11:55:36 -- app/version.sh@14 -- # cut -f2 00:11:31.129 11:55:36 -- app/version.sh@14 -- # tr -d '"' 00:11:31.129 11:55:36 -- app/version.sh@19 -- # patch=1 00:11:31.129 11:55:36 -- app/version.sh@20 -- # get_header_version suffix 00:11:31.129 11:55:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:31.129 11:55:36 -- app/version.sh@14 -- # cut -f2 00:11:31.129 11:55:36 -- app/version.sh@14 -- # tr -d '"' 00:11:31.129 11:55:36 -- app/version.sh@20 -- # suffix=-pre 00:11:31.129 11:55:36 -- app/version.sh@22 -- # version=24.1 00:11:31.129 11:55:36 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:31.129 11:55:36 -- app/version.sh@25 -- # version=24.1.1 00:11:31.129 11:55:36 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:31.129 11:55:36 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:31.129 11:55:36 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:31.129 11:55:36 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:31.129 11:55:36 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:31.129 00:11:31.129 real 0m0.258s 00:11:31.129 user 0m0.157s 00:11:31.129 sys 0m0.141s 00:11:31.129 11:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:31.129 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:11:31.129 ************************************ 00:11:31.129 END TEST version 00:11:31.129 ************************************ 00:11:31.129 11:55:36 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:11:31.129 11:55:36 -- spdk/autotest.sh@191 -- # uname -s 00:11:31.129 11:55:36 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
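The version test that just finished reconstructs the release string from include/spdk/version.h with the grep/cut/tr pipeline visible above and checks it against the installed Python package. In essence:

    # Sketch of the get_header_version parsing traced above (paths as in this run).
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor; (( patch != 0 )) && version=$version.$patch
    # the -pre suffix is then mapped to rc0 and compared with:
    python3 -c 'import spdk; print(spdk.__version__)'   # 24.1.1rc0 in this run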
00:11:31.129 11:55:36 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:11:31.129 11:55:36 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:11:31.129 11:55:36 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:11:31.129 11:55:36 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:31.129 11:55:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.129 11:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.129 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:11:31.389 ************************************ 00:11:31.389 START TEST spdk_dd 00:11:31.389 ************************************ 00:11:31.389 11:55:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:31.389 * Looking for test storage... 00:11:31.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:31.389 11:55:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:31.389 11:55:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:31.389 11:55:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:31.389 11:55:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:31.389 11:55:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:31.389 11:55:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:31.389 11:55:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:31.389 11:55:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:31.389 11:55:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:31.389 11:55:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.389 11:55:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:31.389 11:55:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:31.389 11:55:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:31.389 11:55:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:31.389 11:55:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:31.389 11:55:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:31.389 11:55:36 -- scripts/common.sh@344 -- # : 1 00:11:31.389 11:55:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:31.389 11:55:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.389 11:55:36 -- scripts/common.sh@364 -- # decimal 1 00:11:31.389 11:55:36 -- scripts/common.sh@352 -- # local d=1 00:11:31.389 11:55:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.389 11:55:36 -- scripts/common.sh@354 -- # echo 1 00:11:31.389 11:55:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:31.389 11:55:36 -- scripts/common.sh@365 -- # decimal 2 00:11:31.389 11:55:36 -- scripts/common.sh@352 -- # local d=2 00:11:31.389 11:55:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.389 11:55:36 -- scripts/common.sh@354 -- # echo 2 00:11:31.389 11:55:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:31.389 11:55:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:31.389 11:55:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:31.389 11:55:36 -- scripts/common.sh@367 -- # return 0 00:11:31.389 11:55:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.389 11:55:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.389 --rc genhtml_branch_coverage=1 00:11:31.389 --rc genhtml_function_coverage=1 00:11:31.389 --rc genhtml_legend=1 00:11:31.389 --rc geninfo_all_blocks=1 00:11:31.389 --rc geninfo_unexecuted_blocks=1 00:11:31.389 00:11:31.389 ' 00:11:31.389 11:55:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.389 --rc genhtml_branch_coverage=1 00:11:31.389 --rc genhtml_function_coverage=1 00:11:31.389 --rc genhtml_legend=1 00:11:31.389 --rc geninfo_all_blocks=1 00:11:31.389 --rc geninfo_unexecuted_blocks=1 00:11:31.389 00:11:31.389 ' 00:11:31.389 11:55:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.389 --rc genhtml_branch_coverage=1 00:11:31.389 --rc genhtml_function_coverage=1 00:11:31.389 --rc genhtml_legend=1 00:11:31.389 --rc geninfo_all_blocks=1 00:11:31.389 --rc geninfo_unexecuted_blocks=1 00:11:31.389 00:11:31.389 ' 00:11:31.389 11:55:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:31.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.389 --rc genhtml_branch_coverage=1 00:11:31.389 --rc genhtml_function_coverage=1 00:11:31.389 --rc genhtml_legend=1 00:11:31.389 --rc geninfo_all_blocks=1 00:11:31.389 --rc geninfo_unexecuted_blocks=1 00:11:31.389 00:11:31.389 ' 00:11:31.389 11:55:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.389 11:55:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.389 11:55:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.389 11:55:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.389 11:55:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.389 11:55:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.389 11:55:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.389 11:55:36 -- paths/export.sh@5 -- # export PATH 00:11:31.389 11:55:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.389 11:55:36 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:31.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:31.958 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:31.958 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:31.958 11:55:37 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:11:31.958 11:55:37 -- dd/dd.sh@11 -- # nvme_in_userspace 00:11:31.958 11:55:37 -- scripts/common.sh@311 -- # local bdf bdfs 00:11:31.958 11:55:37 -- scripts/common.sh@312 -- # local nvmes 00:11:31.958 11:55:37 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:11:31.958 11:55:37 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:31.958 11:55:37 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:11:31.959 11:55:37 -- scripts/common.sh@297 -- # local bdf= 00:11:31.959 11:55:37 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:11:31.959 11:55:37 -- scripts/common.sh@232 -- # local class 00:11:31.959 11:55:37 -- scripts/common.sh@233 -- # local subclass 00:11:31.959 11:55:37 -- scripts/common.sh@234 -- # local progif 00:11:31.959 11:55:37 -- scripts/common.sh@235 -- # printf %02x 1 00:11:31.959 11:55:37 -- scripts/common.sh@235 -- # class=01 00:11:31.959 11:55:37 -- scripts/common.sh@236 -- # printf %02x 8 00:11:31.959 11:55:37 -- scripts/common.sh@236 -- # subclass=08 00:11:31.959 11:55:37 -- scripts/common.sh@237 -- # printf %02x 2 00:11:31.959 11:55:37 -- scripts/common.sh@237 -- # progif=02 00:11:31.959 11:55:37 -- scripts/common.sh@239 -- # hash lspci 00:11:31.959 11:55:37 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:11:31.959 11:55:37 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:11:31.959 11:55:37 -- scripts/common.sh@242 -- # grep -i -- -p02 00:11:31.959 11:55:37 -- scripts/common.sh@244 -- # tr -d '"' 00:11:31.959 11:55:37 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:31.959 11:55:37 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:31.959 11:55:37 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:11:31.959 11:55:37 -- scripts/common.sh@15 -- # local i 00:11:31.959 11:55:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:31.959 11:55:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.959 11:55:37 -- scripts/common.sh@24 -- # return 0 00:11:31.959 11:55:37 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:11:31.959 11:55:37 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:31.959 11:55:37 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:11:31.959 11:55:37 -- scripts/common.sh@15 -- # local i 00:11:31.959 11:55:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:31.959 11:55:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.959 11:55:37 -- scripts/common.sh@24 -- # return 0 00:11:31.959 11:55:37 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:11:31.959 11:55:37 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:11:31.959 11:55:37 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:11:31.959 11:55:37 -- scripts/common.sh@322 -- # uname -s 00:11:31.959 11:55:37 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:11:31.959 11:55:37 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:11:31.959 11:55:37 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:11:31.959 11:55:37 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:11:31.959 11:55:37 -- scripts/common.sh@322 -- # uname -s 00:11:31.959 11:55:37 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:11:31.959 11:55:37 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:11:31.959 11:55:37 -- scripts/common.sh@327 -- # (( 2 )) 00:11:31.959 11:55:37 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:11:31.959 11:55:37 -- dd/dd.sh@13 -- # check_liburing 00:11:31.959 11:55:37 -- dd/common.sh@139 -- # local lib so 00:11:31.959 11:55:37 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:11:31.959 11:55:37 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.959 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:11:31.959 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:11:31.960 
11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:31.960 11:55:37 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:11:31.960 11:55:37 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:11:31.960 * spdk_dd linked to liburing 00:11:31.960 11:55:37 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:31.960 11:55:37 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:31.960 11:55:37 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:31.960 11:55:37 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:31.960 11:55:37 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:31.960 11:55:37 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:31.960 11:55:37 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:31.960 11:55:37 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:31.960 11:55:37 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:31.960 11:55:37 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:31.960 11:55:37 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:31.960 11:55:37 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:31.960 11:55:37 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:31.960 11:55:37 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:31.960 11:55:37 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:31.960 11:55:37 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:31.960 11:55:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:31.960 11:55:37 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:31.960 11:55:37 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:31.960 11:55:37 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:31.960 11:55:37 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:31.960 11:55:37 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:11:31.960 11:55:37 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:31.961 11:55:37 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:31.961 11:55:37 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:31.961 11:55:37 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:31.961 11:55:37 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:31.961 11:55:37 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:31.961 11:55:37 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:31.961 11:55:37 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:31.961 11:55:37 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:31.961 11:55:37 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:31.961 11:55:37 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:31.961 11:55:37 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:31.961 11:55:37 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:31.961 11:55:37 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:31.961 11:55:37 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:31.961 11:55:37 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:31.961 11:55:37 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:31.961 11:55:37 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:31.961 11:55:37 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:31.961 11:55:37 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:31.961 11:55:37 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:31.961 11:55:37 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:31.961 11:55:37 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:31.961 11:55:37 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:31.961 11:55:37 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:31.961 11:55:37 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:11:31.961 11:55:37 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:11:31.961 11:55:37 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:31.961 11:55:37 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:11:31.961 11:55:37 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:11:31.961 11:55:37 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:11:31.961 11:55:37 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:11:31.961 11:55:37 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:11:31.961 11:55:37 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:11:31.961 11:55:37 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:11:31.961 11:55:37 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:11:31.961 11:55:37 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:11:31.961 11:55:37 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:11:31.961 11:55:37 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:11:31.961 11:55:37 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:11:31.961 11:55:37 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:31.961 11:55:37 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:11:31.961 11:55:37 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:11:31.961 11:55:37 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:11:31.961 11:55:37 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:11:31.961 11:55:37 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:31.961 11:55:37 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:11:31.961 11:55:37 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:11:31.961 11:55:37 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:11:31.961 11:55:37 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:11:31.961 11:55:37 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:11:31.961 11:55:37 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:11:31.961 11:55:37 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:11:31.961 11:55:37 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:11:31.961 11:55:37 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:11:31.961 11:55:37 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:11:31.961 11:55:37 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:31.961 11:55:37 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:11:31.961 11:55:37 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:11:31.961 11:55:37 -- dd/common.sh@149 -- # [[ y != y ]] 00:11:31.961 11:55:37 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:11:31.961 11:55:37 -- dd/common.sh@156 -- # export liburing_in_use=1 00:11:31.961 11:55:37 -- dd/common.sh@156 -- # liburing_in_use=1 00:11:31.961 11:55:37 -- dd/common.sh@157 -- # return 0 00:11:31.961 11:55:37 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:11:31.961 11:55:37 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:11:31.961 11:55:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:31.961 11:55:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.961 11:55:37 -- common/autotest_common.sh@10 -- # set +x 00:11:31.961 ************************************ 00:11:31.961 START TEST spdk_dd_basic_rw 00:11:31.961 ************************************ 00:11:31.961 11:55:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:11:31.961 * Looking for test storage... 00:11:31.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:31.961 11:55:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:31.961 11:55:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:31.961 11:55:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:32.220 11:55:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:32.220 11:55:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:32.220 11:55:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:32.220 11:55:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:32.220 11:55:37 -- scripts/common.sh@335 -- # IFS=.-: 00:11:32.220 11:55:37 -- scripts/common.sh@335 -- # read -ra ver1 00:11:32.220 11:55:37 -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.220 11:55:37 -- scripts/common.sh@336 -- # read -ra ver2 00:11:32.220 11:55:37 -- scripts/common.sh@337 -- # local 'op=<' 00:11:32.220 11:55:37 -- scripts/common.sh@339 -- # ver1_l=2 00:11:32.220 11:55:37 -- scripts/common.sh@340 -- # ver2_l=1 00:11:32.220 11:55:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:32.220 11:55:37 -- scripts/common.sh@343 -- # case "$op" in 00:11:32.220 11:55:37 -- scripts/common.sh@344 -- # : 1 00:11:32.220 11:55:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:32.220 11:55:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.220 11:55:37 -- scripts/common.sh@364 -- # decimal 1 00:11:32.220 11:55:37 -- scripts/common.sh@352 -- # local d=1 00:11:32.220 11:55:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.220 11:55:37 -- scripts/common.sh@354 -- # echo 1 00:11:32.220 11:55:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:32.220 11:55:37 -- scripts/common.sh@365 -- # decimal 2 00:11:32.220 11:55:37 -- scripts/common.sh@352 -- # local d=2 00:11:32.220 11:55:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.220 11:55:37 -- scripts/common.sh@354 -- # echo 2 00:11:32.220 11:55:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:32.220 11:55:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:32.220 11:55:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:32.220 11:55:37 -- scripts/common.sh@367 -- # return 0 00:11:32.220 11:55:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.220 11:55:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:32.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.220 --rc genhtml_branch_coverage=1 00:11:32.221 --rc genhtml_function_coverage=1 00:11:32.221 --rc genhtml_legend=1 00:11:32.221 --rc geninfo_all_blocks=1 00:11:32.221 --rc geninfo_unexecuted_blocks=1 00:11:32.221 00:11:32.221 ' 00:11:32.221 11:55:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:32.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.221 --rc genhtml_branch_coverage=1 00:11:32.221 --rc genhtml_function_coverage=1 00:11:32.221 --rc genhtml_legend=1 00:11:32.221 --rc geninfo_all_blocks=1 00:11:32.221 --rc geninfo_unexecuted_blocks=1 00:11:32.221 00:11:32.221 ' 00:11:32.221 11:55:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:32.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.221 --rc genhtml_branch_coverage=1 00:11:32.221 --rc genhtml_function_coverage=1 00:11:32.221 --rc genhtml_legend=1 00:11:32.221 --rc geninfo_all_blocks=1 00:11:32.221 --rc geninfo_unexecuted_blocks=1 00:11:32.221 00:11:32.221 ' 00:11:32.221 11:55:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:32.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.221 --rc genhtml_branch_coverage=1 00:11:32.221 --rc genhtml_function_coverage=1 00:11:32.221 --rc genhtml_legend=1 00:11:32.221 --rc geninfo_all_blocks=1 00:11:32.221 --rc geninfo_unexecuted_blocks=1 00:11:32.221 00:11:32.221 ' 00:11:32.221 11:55:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.221 11:55:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.221 11:55:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.221 11:55:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.221 11:55:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.221 11:55:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.221 11:55:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.221 11:55:37 -- paths/export.sh@5 -- # export PATH 00:11:32.221 11:55:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.221 11:55:37 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:11:32.221 11:55:37 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:11:32.221 11:55:37 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:11:32.221 11:55:37 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:11:32.221 11:55:37 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:11:32.221 11:55:37 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:11:32.221 11:55:37 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:32.221 11:55:37 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:32.221 11:55:37 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:32.221 11:55:37 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:11:32.221 11:55:37 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:11:32.221 11:55:37 -- dd/common.sh@126 -- # mapfile -t id 00:11:32.221 11:55:37 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:11:32.482 11:55:37 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2191 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:11:32.482 11:55:37 -- dd/common.sh@130 -- # lbaf=04 00:11:32.482 11:55:37 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 
Data Units Written: 9 Host Read Commands: 2191 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:11:32.482 11:55:37 -- dd/common.sh@132 -- # lbaf=4096 00:11:32.482 11:55:37 -- dd/common.sh@134 -- # echo 4096 00:11:32.482 11:55:37 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:11:32.482 11:55:37 -- dd/basic_rw.sh@96 -- # : 00:11:32.482 11:55:37 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:32.482 11:55:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:32.482 11:55:37 -- dd/basic_rw.sh@96 -- # gen_conf 00:11:32.482 11:55:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.482 11:55:37 -- dd/common.sh@31 -- # xtrace_disable 00:11:32.482 11:55:37 -- common/autotest_common.sh@10 -- # set +x 00:11:32.482 11:55:37 -- common/autotest_common.sh@10 -- # set +x 00:11:32.482 ************************************ 00:11:32.482 START TEST dd_bs_lt_native_bs 00:11:32.482 ************************************ 00:11:32.483 11:55:37 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:32.483 11:55:37 -- common/autotest_common.sh@650 -- # local es=0 00:11:32.483 11:55:37 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:32.483 11:55:37 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.483 11:55:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.483 11:55:37 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.483 11:55:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.483 11:55:37 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.483 11:55:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.483 11:55:37 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.483 11:55:37 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:32.483 11:55:37 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:32.483 { 00:11:32.483 "subsystems": [ 00:11:32.483 { 00:11:32.483 "subsystem": "bdev", 00:11:32.483 "config": [ 00:11:32.483 { 00:11:32.483 "params": { 00:11:32.483 "trtype": "pcie", 00:11:32.483 "traddr": "0000:00:06.0", 00:11:32.483 "name": "Nvme0" 00:11:32.483 }, 00:11:32.483 "method": "bdev_nvme_attach_controller" 00:11:32.483 }, 00:11:32.483 { 00:11:32.483 "method": "bdev_wait_for_examine" 00:11:32.483 } 00:11:32.483 ] 00:11:32.483 } 00:11:32.483 ] 00:11:32.483 } 00:11:32.483 [2024-11-29 11:55:37.800823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:32.483 [2024-11-29 11:55:37.800982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69701 ] 00:11:32.483 [2024-11-29 11:55:37.942254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.741 [2024-11-29 11:55:38.089738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.000 [2024-11-29 11:55:38.287639] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:11:33.000 [2024-11-29 11:55:38.287733] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.000 [2024-11-29 11:55:38.464274] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:11:33.260 11:55:38 -- common/autotest_common.sh@653 -- # es=234 00:11:33.260 11:55:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.260 11:55:38 -- common/autotest_common.sh@662 -- # es=106 00:11:33.260 11:55:38 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:33.260 11:55:38 -- common/autotest_common.sh@670 -- # es=1 00:11:33.260 11:55:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.260 00:11:33.260 real 0m0.843s 00:11:33.260 user 0m0.586s 00:11:33.260 sys 0m0.212s 00:11:33.260 11:55:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.260 11:55:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 ************************************ 00:11:33.260 END TEST dd_bs_lt_native_bs 00:11:33.260 ************************************ 00:11:33.260 11:55:38 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:11:33.260 11:55:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.260 11:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.260 11:55:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.260 ************************************ 00:11:33.260 START TEST dd_rw 00:11:33.260 ************************************ 00:11:33.260 11:55:38 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:11:33.260 11:55:38 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:11:33.260 11:55:38 -- dd/basic_rw.sh@12 -- # local count size 00:11:33.260 11:55:38 -- dd/basic_rw.sh@13 -- # local qds bss 00:11:33.260 11:55:38 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
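Before the read/write tests start, the dd/common.sh helpers traced above pull the namespace's native block size out of the spdk_nvme_identify dump, and the dd_bs_lt_native_bs test then checks that spdk_dd refuses a --bs of 2048 against this 4096-byte namespace. Not part of the captured run, a minimal bash sketch of that block-size extraction, assuming the same QEMU controller at 0000:00:06.0:

```bash
# Sketch only: mirrors the regex matching shown in the trace above,
# not the dd/common.sh helper itself. Assumes the controller at 0000:00:06.0.
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')

pat='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $pat ]] && lbaf=${BASH_REMATCH[1]}        # "04" in this run

pat="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $pat ]] && native_bs=${BASH_REMATCH[1]}   # 4096 bytes on this namespace

echo "$native_bs"
```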
00:11:33.260 11:55:38 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:33.260 11:55:38 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:33.260 11:55:38 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:33.260 11:55:38 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:33.260 11:55:38 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:33.260 11:55:38 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:33.260 11:55:38 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:33.260 11:55:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:33.260 11:55:38 -- dd/basic_rw.sh@23 -- # count=15 00:11:33.260 11:55:38 -- dd/basic_rw.sh@24 -- # count=15 00:11:33.260 11:55:38 -- dd/basic_rw.sh@25 -- # size=61440 00:11:33.260 11:55:38 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:33.260 11:55:38 -- dd/common.sh@98 -- # xtrace_disable 00:11:33.260 11:55:38 -- common/autotest_common.sh@10 -- # set +x 00:11:33.828 11:55:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:11:33.828 11:55:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:33.828 11:55:39 -- dd/common.sh@31 -- # xtrace_disable 00:11:33.828 11:55:39 -- common/autotest_common.sh@10 -- # set +x 00:11:34.087 [2024-11-29 11:55:39.346991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:34.087 [2024-11-29 11:55:39.347118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69737 ] 00:11:34.087 { 00:11:34.087 "subsystems": [ 00:11:34.087 { 00:11:34.087 "subsystem": "bdev", 00:11:34.087 "config": [ 00:11:34.087 { 00:11:34.087 "params": { 00:11:34.087 "trtype": "pcie", 00:11:34.087 "traddr": "0000:00:06.0", 00:11:34.087 "name": "Nvme0" 00:11:34.087 }, 00:11:34.087 "method": "bdev_nvme_attach_controller" 00:11:34.087 }, 00:11:34.087 { 00:11:34.087 "method": "bdev_wait_for_examine" 00:11:34.087 } 00:11:34.087 ] 00:11:34.087 } 00:11:34.087 ] 00:11:34.087 } 00:11:34.087 [2024-11-29 11:55:39.480172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.346 [2024-11-29 11:55:39.611193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.346  [2024-11-29T11:55:40.424Z] Copying: 60/60 [kB] (average 29 MBps) 00:11:34.913 00:11:34.913 11:55:40 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:11:34.913 11:55:40 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:34.913 11:55:40 -- dd/common.sh@31 -- # xtrace_disable 00:11:34.913 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:11:34.913 [2024-11-29 11:55:40.207332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
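Each spdk_dd invocation above receives its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62). Not taken from the run itself, a minimal sketch of feeding the same two-entry config via process substitution; gen_conf is an illustrative name, and the harness itself wires the JSON through fd 62 with exec redirections rather than this simpler stand-in:

```bash
# Illustrative only: the JSON mirrors the "subsystems" block printed above
# before spdk_dd attaches Nvme0.
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# First write pass of the qd=1 run above: dd.dump0 -> Nvme0n1 in 4096-byte blocks.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
```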
00:11:34.913 [2024-11-29 11:55:40.207495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:11:34.913 { 00:11:34.913 "subsystems": [ 00:11:34.913 { 00:11:34.913 "subsystem": "bdev", 00:11:34.913 "config": [ 00:11:34.913 { 00:11:34.913 "params": { 00:11:34.913 "trtype": "pcie", 00:11:34.913 "traddr": "0000:00:06.0", 00:11:34.913 "name": "Nvme0" 00:11:34.913 }, 00:11:34.913 "method": "bdev_nvme_attach_controller" 00:11:34.913 }, 00:11:34.913 { 00:11:34.913 "method": "bdev_wait_for_examine" 00:11:34.913 } 00:11:34.913 ] 00:11:34.913 } 00:11:34.913 ] 00:11:34.913 } 00:11:34.913 [2024-11-29 11:55:40.343217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.172 [2024-11-29 11:55:40.472727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.172  [2024-11-29T11:55:41.251Z] Copying: 60/60 [kB] (average 29 MBps) 00:11:35.740 00:11:35.740 11:55:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:35.740 11:55:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:35.740 11:55:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:35.740 11:55:41 -- dd/common.sh@11 -- # local nvme_ref= 00:11:35.740 11:55:41 -- dd/common.sh@12 -- # local size=61440 00:11:35.740 11:55:41 -- dd/common.sh@14 -- # local bs=1048576 00:11:35.740 11:55:41 -- dd/common.sh@15 -- # local count=1 00:11:35.740 11:55:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:35.740 11:55:41 -- dd/common.sh@18 -- # gen_conf 00:11:35.740 11:55:41 -- dd/common.sh@31 -- # xtrace_disable 00:11:35.740 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:11:35.740 [2024-11-29 11:55:41.080075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:35.740 [2024-11-29 11:55:41.080214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69769 ] 00:11:35.740 { 00:11:35.740 "subsystems": [ 00:11:35.740 { 00:11:35.740 "subsystem": "bdev", 00:11:35.740 "config": [ 00:11:35.740 { 00:11:35.740 "params": { 00:11:35.740 "trtype": "pcie", 00:11:35.740 "traddr": "0000:00:06.0", 00:11:35.740 "name": "Nvme0" 00:11:35.740 }, 00:11:35.740 "method": "bdev_nvme_attach_controller" 00:11:35.740 }, 00:11:35.740 { 00:11:35.740 "method": "bdev_wait_for_examine" 00:11:35.740 } 00:11:35.740 ] 00:11:35.740 } 00:11:35.740 ] 00:11:35.740 } 00:11:35.740 [2024-11-29 11:55:41.216137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.999 [2024-11-29 11:55:41.349673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.258  [2024-11-29T11:55:42.028Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:36.517 00:11:36.517 11:55:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:36.517 11:55:41 -- dd/basic_rw.sh@23 -- # count=15 00:11:36.517 11:55:41 -- dd/basic_rw.sh@24 -- # count=15 00:11:36.517 11:55:41 -- dd/basic_rw.sh@25 -- # size=61440 00:11:36.517 11:55:41 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:36.517 11:55:41 -- dd/common.sh@98 -- # xtrace_disable 00:11:36.517 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 11:55:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:11:37.091 11:55:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:37.091 11:55:42 -- dd/common.sh@31 -- # xtrace_disable 00:11:37.091 11:55:42 -- common/autotest_common.sh@10 -- # set +x 00:11:37.091 [2024-11-29 11:55:42.563895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
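After each write/read/diff pass the harness runs clear_nvme, overwriting the just-written region with zeroes so the next queue-depth pass starts from a clean namespace; that step finished above with the 1024/1024 kB copy. Not the helper itself, a hedged sketch of the zero-fill it performs, reusing the illustrative gen_conf from the previous sketch:

```bash
# Sketch of the clear step traced above: a single 1 MiB block of zeroes from
# /dev/zero is written over the bdev before the next bs/qd combination runs.
bs=1048576
count=1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json <(gen_conf)
```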
00:11:37.091 [2024-11-29 11:55:42.564020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69798 ] 00:11:37.091 { 00:11:37.091 "subsystems": [ 00:11:37.091 { 00:11:37.091 "subsystem": "bdev", 00:11:37.091 "config": [ 00:11:37.091 { 00:11:37.091 "params": { 00:11:37.091 "trtype": "pcie", 00:11:37.091 "traddr": "0000:00:06.0", 00:11:37.091 "name": "Nvme0" 00:11:37.091 }, 00:11:37.091 "method": "bdev_nvme_attach_controller" 00:11:37.091 }, 00:11:37.091 { 00:11:37.091 "method": "bdev_wait_for_examine" 00:11:37.091 } 00:11:37.091 ] 00:11:37.091 } 00:11:37.091 ] 00:11:37.091 } 00:11:37.359 [2024-11-29 11:55:42.702905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.359 [2024-11-29 11:55:42.837133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.618  [2024-11-29T11:55:43.390Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:37.879 00:11:37.879 11:55:43 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:11:37.879 11:55:43 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:37.879 11:55:43 -- dd/common.sh@31 -- # xtrace_disable 00:11:37.879 11:55:43 -- common/autotest_common.sh@10 -- # set +x 00:11:38.138 [2024-11-29 11:55:43.436427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:38.138 { 00:11:38.138 "subsystems": [ 00:11:38.138 { 00:11:38.138 "subsystem": "bdev", 00:11:38.138 "config": [ 00:11:38.138 { 00:11:38.138 "params": { 00:11:38.138 "trtype": "pcie", 00:11:38.138 "traddr": "0000:00:06.0", 00:11:38.138 "name": "Nvme0" 00:11:38.138 }, 00:11:38.138 "method": "bdev_nvme_attach_controller" 00:11:38.138 }, 00:11:38.138 { 00:11:38.138 "method": "bdev_wait_for_examine" 00:11:38.138 } 00:11:38.138 ] 00:11:38.138 } 00:11:38.138 ] 00:11:38.138 } 00:11:38.138 [2024-11-29 11:55:43.436573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:11:38.138 [2024-11-29 11:55:43.578384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.397 [2024-11-29 11:55:43.714056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.656  [2024-11-29T11:55:44.426Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:38.915 00:11:38.915 11:55:44 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:38.915 11:55:44 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:38.915 11:55:44 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:38.915 11:55:44 -- dd/common.sh@11 -- # local nvme_ref= 00:11:38.915 11:55:44 -- dd/common.sh@12 -- # local size=61440 00:11:38.915 11:55:44 -- dd/common.sh@14 -- # local bs=1048576 00:11:38.915 11:55:44 -- dd/common.sh@15 -- # local count=1 00:11:38.915 11:55:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:38.915 11:55:44 -- dd/common.sh@18 -- # gen_conf 00:11:38.915 11:55:44 -- dd/common.sh@31 -- # xtrace_disable 00:11:38.915 11:55:44 -- common/autotest_common.sh@10 -- # set +x 00:11:38.915 [2024-11-29 
11:55:44.325169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:38.915 [2024-11-29 11:55:44.325304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69824 ] 00:11:38.915 { 00:11:38.915 "subsystems": [ 00:11:38.915 { 00:11:38.915 "subsystem": "bdev", 00:11:38.915 "config": [ 00:11:38.915 { 00:11:38.915 "params": { 00:11:38.915 "trtype": "pcie", 00:11:38.915 "traddr": "0000:00:06.0", 00:11:38.915 "name": "Nvme0" 00:11:38.915 }, 00:11:38.915 "method": "bdev_nvme_attach_controller" 00:11:38.915 }, 00:11:38.915 { 00:11:38.915 "method": "bdev_wait_for_examine" 00:11:38.915 } 00:11:38.915 ] 00:11:38.915 } 00:11:38.915 ] 00:11:38.915 } 00:11:39.174 [2024-11-29 11:55:44.465255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.174 [2024-11-29 11:55:44.603098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.432  [2024-11-29T11:55:45.203Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:39.692 00:11:39.692 11:55:45 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:39.692 11:55:45 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:39.692 11:55:45 -- dd/basic_rw.sh@23 -- # count=7 00:11:39.692 11:55:45 -- dd/basic_rw.sh@24 -- # count=7 00:11:39.692 11:55:45 -- dd/basic_rw.sh@25 -- # size=57344 00:11:39.692 11:55:45 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:39.692 11:55:45 -- dd/common.sh@98 -- # xtrace_disable 00:11:39.692 11:55:45 -- common/autotest_common.sh@10 -- # set +x 00:11:40.258 11:55:45 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:11:40.258 11:55:45 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:40.258 11:55:45 -- dd/common.sh@31 -- # xtrace_disable 00:11:40.258 11:55:45 -- common/autotest_common.sh@10 -- # set +x 00:11:40.517 [2024-11-29 11:55:45.772198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
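With both 4096-byte passes done, the same cycle repeats at 8192 bytes (count drops to 7 so the transfer stays at 57344 bytes), and the bss setup above also shifts up to 16384 for a later group. Not the basic_rw.sh script itself, a condensed sketch of the write, read-back, and compare loop the log walks through, limited to the block sizes and counts already observed in this run and again assuming the illustrative gen_conf helper:

```bash
# Condensed sketch of the cycle each (block size, queue depth) pair goes through.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
declare -A counts=([4096]=15 [8192]=7)      # transfer sizes 61440 B and 57344 B in this run

for bs in 4096 8192; do
    for qd in 1 64; do
        count=${counts[$bs]}
        "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q dd.dump0 dd.dump1           # the read-back must match what was written
    done
done
```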
00:11:40.517 [2024-11-29 11:55:45.772335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69853 ] 00:11:40.517 { 00:11:40.517 "subsystems": [ 00:11:40.517 { 00:11:40.517 "subsystem": "bdev", 00:11:40.517 "config": [ 00:11:40.517 { 00:11:40.517 "params": { 00:11:40.517 "trtype": "pcie", 00:11:40.517 "traddr": "0000:00:06.0", 00:11:40.517 "name": "Nvme0" 00:11:40.517 }, 00:11:40.517 "method": "bdev_nvme_attach_controller" 00:11:40.517 }, 00:11:40.517 { 00:11:40.517 "method": "bdev_wait_for_examine" 00:11:40.517 } 00:11:40.517 ] 00:11:40.517 } 00:11:40.517 ] 00:11:40.517 } 00:11:40.517 [2024-11-29 11:55:45.911292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.776 [2024-11-29 11:55:46.046780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.776  [2024-11-29T11:55:46.855Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:41.344 00:11:41.344 11:55:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:11:41.344 11:55:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:41.344 11:55:46 -- dd/common.sh@31 -- # xtrace_disable 00:11:41.344 11:55:46 -- common/autotest_common.sh@10 -- # set +x 00:11:41.344 [2024-11-29 11:55:46.647897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.344 [2024-11-29 11:55:46.648020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69866 ] 00:11:41.344 { 00:11:41.344 "subsystems": [ 00:11:41.344 { 00:11:41.344 "subsystem": "bdev", 00:11:41.344 "config": [ 00:11:41.344 { 00:11:41.344 "params": { 00:11:41.344 "trtype": "pcie", 00:11:41.344 "traddr": "0000:00:06.0", 00:11:41.344 "name": "Nvme0" 00:11:41.344 }, 00:11:41.344 "method": "bdev_nvme_attach_controller" 00:11:41.344 }, 00:11:41.344 { 00:11:41.344 "method": "bdev_wait_for_examine" 00:11:41.344 } 00:11:41.344 ] 00:11:41.344 } 00:11:41.344 ] 00:11:41.344 } 00:11:41.344 [2024-11-29 11:55:46.780354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.604 [2024-11-29 11:55:46.919616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.862  [2024-11-29T11:55:47.632Z] Copying: 56/56 [kB] (average 27 MBps) 00:11:42.121 00:11:42.121 11:55:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:42.121 11:55:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:42.121 11:55:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:42.121 11:55:47 -- dd/common.sh@11 -- # local nvme_ref= 00:11:42.121 11:55:47 -- dd/common.sh@12 -- # local size=57344 00:11:42.121 11:55:47 -- dd/common.sh@14 -- # local bs=1048576 00:11:42.121 11:55:47 -- dd/common.sh@15 -- # local count=1 00:11:42.121 11:55:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:42.121 11:55:47 -- dd/common.sh@18 -- # gen_conf 00:11:42.121 11:55:47 -- dd/common.sh@31 -- # xtrace_disable 00:11:42.121 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:11:42.121 [2024-11-29 
11:55:47.542830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:42.121 [2024-11-29 11:55:47.542949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:11:42.121 { 00:11:42.121 "subsystems": [ 00:11:42.121 { 00:11:42.121 "subsystem": "bdev", 00:11:42.121 "config": [ 00:11:42.121 { 00:11:42.121 "params": { 00:11:42.121 "trtype": "pcie", 00:11:42.121 "traddr": "0000:00:06.0", 00:11:42.121 "name": "Nvme0" 00:11:42.121 }, 00:11:42.121 "method": "bdev_nvme_attach_controller" 00:11:42.121 }, 00:11:42.121 { 00:11:42.121 "method": "bdev_wait_for_examine" 00:11:42.121 } 00:11:42.121 ] 00:11:42.121 } 00:11:42.121 ] 00:11:42.121 } 00:11:42.380 [2024-11-29 11:55:47.681871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.380 [2024-11-29 11:55:47.818884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.638  [2024-11-29T11:55:48.408Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:42.897 00:11:42.897 11:55:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:42.897 11:55:48 -- dd/basic_rw.sh@23 -- # count=7 00:11:42.897 11:55:48 -- dd/basic_rw.sh@24 -- # count=7 00:11:42.897 11:55:48 -- dd/basic_rw.sh@25 -- # size=57344 00:11:42.897 11:55:48 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:42.897 11:55:48 -- dd/common.sh@98 -- # xtrace_disable 00:11:42.897 11:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 11:55:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:11:43.465 11:55:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:43.465 11:55:48 -- dd/common.sh@31 -- # xtrace_disable 00:11:43.465 11:55:48 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 [2024-11-29 11:55:48.970572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:43.465 [2024-11-29 11:55:48.970729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69908 ] 00:11:43.724 { 00:11:43.724 "subsystems": [ 00:11:43.724 { 00:11:43.724 "subsystem": "bdev", 00:11:43.724 "config": [ 00:11:43.724 { 00:11:43.724 "params": { 00:11:43.724 "trtype": "pcie", 00:11:43.724 "traddr": "0000:00:06.0", 00:11:43.724 "name": "Nvme0" 00:11:43.724 }, 00:11:43.724 "method": "bdev_nvme_attach_controller" 00:11:43.724 }, 00:11:43.724 { 00:11:43.724 "method": "bdev_wait_for_examine" 00:11:43.724 } 00:11:43.724 ] 00:11:43.724 } 00:11:43.724 ] 00:11:43.724 } 00:11:43.724 [2024-11-29 11:55:49.109810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.983 [2024-11-29 11:55:49.243440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.983  [2024-11-29T11:55:50.078Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:44.567 00:11:44.567 11:55:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:11:44.567 11:55:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:44.567 11:55:49 -- dd/common.sh@31 -- # xtrace_disable 00:11:44.567 11:55:49 -- common/autotest_common.sh@10 -- # set +x 00:11:44.567 [2024-11-29 11:55:49.851749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:44.567 [2024-11-29 11:55:49.851871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69921 ] 00:11:44.567 { 00:11:44.567 "subsystems": [ 00:11:44.567 { 00:11:44.567 "subsystem": "bdev", 00:11:44.567 "config": [ 00:11:44.567 { 00:11:44.567 "params": { 00:11:44.567 "trtype": "pcie", 00:11:44.567 "traddr": "0000:00:06.0", 00:11:44.567 "name": "Nvme0" 00:11:44.567 }, 00:11:44.567 "method": "bdev_nvme_attach_controller" 00:11:44.567 }, 00:11:44.567 { 00:11:44.567 "method": "bdev_wait_for_examine" 00:11:44.567 } 00:11:44.567 ] 00:11:44.567 } 00:11:44.567 ] 00:11:44.567 } 00:11:44.567 [2024-11-29 11:55:49.990547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.826 [2024-11-29 11:55:50.125765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.826  [2024-11-29T11:55:50.906Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:45.395 00:11:45.395 11:55:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:45.395 11:55:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:45.395 11:55:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:45.395 11:55:50 -- dd/common.sh@11 -- # local nvme_ref= 00:11:45.395 11:55:50 -- dd/common.sh@12 -- # local size=57344 00:11:45.395 11:55:50 -- dd/common.sh@14 -- # local bs=1048576 00:11:45.395 11:55:50 -- dd/common.sh@15 -- # local count=1 00:11:45.395 11:55:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:45.395 11:55:50 -- dd/common.sh@18 -- # gen_conf 00:11:45.395 11:55:50 -- dd/common.sh@31 -- # xtrace_disable 00:11:45.395 11:55:50 -- common/autotest_common.sh@10 -- # set +x 00:11:45.395 [2024-11-29 
11:55:50.741994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:45.395 [2024-11-29 11:55:50.742107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:11:45.395 { 00:11:45.395 "subsystems": [ 00:11:45.395 { 00:11:45.395 "subsystem": "bdev", 00:11:45.395 "config": [ 00:11:45.395 { 00:11:45.395 "params": { 00:11:45.395 "trtype": "pcie", 00:11:45.395 "traddr": "0000:00:06.0", 00:11:45.395 "name": "Nvme0" 00:11:45.395 }, 00:11:45.395 "method": "bdev_nvme_attach_controller" 00:11:45.395 }, 00:11:45.395 { 00:11:45.395 "method": "bdev_wait_for_examine" 00:11:45.395 } 00:11:45.395 ] 00:11:45.395 } 00:11:45.395 ] 00:11:45.395 } 00:11:45.395 [2024-11-29 11:55:50.876837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.654 [2024-11-29 11:55:51.011787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.912  [2024-11-29T11:55:51.682Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:46.171 00:11:46.171 11:55:51 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:46.171 11:55:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:46.171 11:55:51 -- dd/basic_rw.sh@23 -- # count=3 00:11:46.171 11:55:51 -- dd/basic_rw.sh@24 -- # count=3 00:11:46.171 11:55:51 -- dd/basic_rw.sh@25 -- # size=49152 00:11:46.171 11:55:51 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:46.171 11:55:51 -- dd/common.sh@98 -- # xtrace_disable 00:11:46.171 11:55:51 -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 11:55:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:11:46.738 11:55:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:46.738 11:55:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:46.738 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:46.738 [2024-11-29 11:55:52.084574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:46.738 [2024-11-29 11:55:52.084706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69962 ] 00:11:46.738 { 00:11:46.738 "subsystems": [ 00:11:46.738 { 00:11:46.738 "subsystem": "bdev", 00:11:46.738 "config": [ 00:11:46.738 { 00:11:46.738 "params": { 00:11:46.738 "trtype": "pcie", 00:11:46.738 "traddr": "0000:00:06.0", 00:11:46.738 "name": "Nvme0" 00:11:46.738 }, 00:11:46.738 "method": "bdev_nvme_attach_controller" 00:11:46.738 }, 00:11:46.738 { 00:11:46.738 "method": "bdev_wait_for_examine" 00:11:46.738 } 00:11:46.738 ] 00:11:46.738 } 00:11:46.738 ] 00:11:46.738 } 00:11:46.738 [2024-11-29 11:55:52.217262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.997 [2024-11-29 11:55:52.355751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.255  [2024-11-29T11:55:53.025Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:47.514 00:11:47.514 11:55:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:11:47.514 11:55:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:47.514 11:55:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:47.514 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:47.514 [2024-11-29 11:55:52.956982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:47.514 [2024-11-29 11:55:52.957141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69976 ] 00:11:47.514 { 00:11:47.514 "subsystems": [ 00:11:47.514 { 00:11:47.514 "subsystem": "bdev", 00:11:47.514 "config": [ 00:11:47.514 { 00:11:47.514 "params": { 00:11:47.514 "trtype": "pcie", 00:11:47.514 "traddr": "0000:00:06.0", 00:11:47.514 "name": "Nvme0" 00:11:47.514 }, 00:11:47.514 "method": "bdev_nvme_attach_controller" 00:11:47.514 }, 00:11:47.514 { 00:11:47.514 "method": "bdev_wait_for_examine" 00:11:47.514 } 00:11:47.514 ] 00:11:47.514 } 00:11:47.514 ] 00:11:47.514 } 00:11:47.772 [2024-11-29 11:55:53.096080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.772 [2024-11-29 11:55:53.219891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.029  [2024-11-29T11:55:53.798Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:48.287 00:11:48.546 11:55:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:48.546 11:55:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:48.546 11:55:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:48.546 11:55:53 -- dd/common.sh@11 -- # local nvme_ref= 00:11:48.546 11:55:53 -- dd/common.sh@12 -- # local size=49152 00:11:48.546 11:55:53 -- dd/common.sh@14 -- # local bs=1048576 00:11:48.546 11:55:53 -- dd/common.sh@15 -- # local count=1 00:11:48.546 11:55:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:48.546 11:55:53 -- dd/common.sh@18 -- # gen_conf 00:11:48.546 11:55:53 -- dd/common.sh@31 -- # xtrace_disable 00:11:48.546 11:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 [2024-11-29 
11:55:53.859913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:48.546 [2024-11-29 11:55:53.860069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69995 ] 00:11:48.546 { 00:11:48.546 "subsystems": [ 00:11:48.546 { 00:11:48.546 "subsystem": "bdev", 00:11:48.546 "config": [ 00:11:48.546 { 00:11:48.546 "params": { 00:11:48.546 "trtype": "pcie", 00:11:48.546 "traddr": "0000:00:06.0", 00:11:48.546 "name": "Nvme0" 00:11:48.546 }, 00:11:48.546 "method": "bdev_nvme_attach_controller" 00:11:48.546 }, 00:11:48.546 { 00:11:48.547 "method": "bdev_wait_for_examine" 00:11:48.547 } 00:11:48.547 ] 00:11:48.547 } 00:11:48.547 ] 00:11:48.547 } 00:11:48.547 [2024-11-29 11:55:53.999793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.806 [2024-11-29 11:55:54.133910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.065  [2024-11-29T11:55:54.836Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:49.325 00:11:49.325 11:55:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:49.325 11:55:54 -- dd/basic_rw.sh@23 -- # count=3 00:11:49.325 11:55:54 -- dd/basic_rw.sh@24 -- # count=3 00:11:49.325 11:55:54 -- dd/basic_rw.sh@25 -- # size=49152 00:11:49.325 11:55:54 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:49.325 11:55:54 -- dd/common.sh@98 -- # xtrace_disable 00:11:49.325 11:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 11:55:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:11:49.891 11:55:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:49.891 11:55:55 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.891 11:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 [2024-11-29 11:55:55.228458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:49.892 [2024-11-29 11:55:55.229211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70013 ] 00:11:49.892 { 00:11:49.892 "subsystems": [ 00:11:49.892 { 00:11:49.892 "subsystem": "bdev", 00:11:49.892 "config": [ 00:11:49.892 { 00:11:49.892 "params": { 00:11:49.892 "trtype": "pcie", 00:11:49.892 "traddr": "0000:00:06.0", 00:11:49.892 "name": "Nvme0" 00:11:49.892 }, 00:11:49.892 "method": "bdev_nvme_attach_controller" 00:11:49.892 }, 00:11:49.892 { 00:11:49.892 "method": "bdev_wait_for_examine" 00:11:49.892 } 00:11:49.892 ] 00:11:49.892 } 00:11:49.892 ] 00:11:49.892 } 00:11:49.892 [2024-11-29 11:55:55.367899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.148 [2024-11-29 11:55:55.501278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.454  [2024-11-29T11:55:56.274Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:50.763 00:11:50.763 11:55:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:11:50.763 11:55:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:50.763 11:55:56 -- dd/common.sh@31 -- # xtrace_disable 00:11:50.763 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:50.763 { 00:11:50.763 "subsystems": [ 00:11:50.763 { 00:11:50.763 "subsystem": "bdev", 00:11:50.763 "config": [ 00:11:50.763 { 00:11:50.763 "params": { 00:11:50.763 "trtype": "pcie", 00:11:50.763 "traddr": "0000:00:06.0", 00:11:50.763 "name": "Nvme0" 00:11:50.763 }, 00:11:50.763 "method": "bdev_nvme_attach_controller" 00:11:50.763 }, 00:11:50.763 { 00:11:50.763 "method": "bdev_wait_for_examine" 00:11:50.763 } 00:11:50.763 ] 00:11:50.763 } 00:11:50.763 ] 00:11:50.763 } 00:11:50.763 [2024-11-29 11:55:56.103773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:50.763 [2024-11-29 11:55:56.104593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70031 ] 00:11:50.763 [2024-11-29 11:55:56.245303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.021 [2024-11-29 11:55:56.383270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.280  [2024-11-29T11:55:57.050Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:51.539 00:11:51.539 11:55:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:51.539 11:55:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:51.539 11:55:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:51.539 11:55:56 -- dd/common.sh@11 -- # local nvme_ref= 00:11:51.539 11:55:56 -- dd/common.sh@12 -- # local size=49152 00:11:51.539 11:55:56 -- dd/common.sh@14 -- # local bs=1048576 00:11:51.539 11:55:56 -- dd/common.sh@15 -- # local count=1 00:11:51.539 11:55:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:51.539 11:55:56 -- dd/common.sh@18 -- # gen_conf 00:11:51.539 11:55:56 -- dd/common.sh@31 -- # xtrace_disable 00:11:51.539 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:51.539 [2024-11-29 11:55:57.024216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:51.539 [2024-11-29 11:55:57.024368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70050 ] 00:11:51.539 { 00:11:51.539 "subsystems": [ 00:11:51.539 { 00:11:51.539 "subsystem": "bdev", 00:11:51.539 "config": [ 00:11:51.539 { 00:11:51.539 "params": { 00:11:51.539 "trtype": "pcie", 00:11:51.539 "traddr": "0000:00:06.0", 00:11:51.539 "name": "Nvme0" 00:11:51.539 }, 00:11:51.539 "method": "bdev_nvme_attach_controller" 00:11:51.539 }, 00:11:51.539 { 00:11:51.539 "method": "bdev_wait_for_examine" 00:11:51.539 } 00:11:51.539 ] 00:11:51.539 } 00:11:51.539 ] 00:11:51.539 } 00:11:51.797 [2024-11-29 11:55:57.163637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.797 [2024-11-29 11:55:57.295891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.055  [2024-11-29T11:55:58.133Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:52.622 00:11:52.622 00:11:52.622 real 0m19.225s 00:11:52.622 user 0m13.907s 00:11:52.622 sys 0m4.212s 00:11:52.622 11:55:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.622 11:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:52.622 ************************************ 00:11:52.622 END TEST dd_rw 00:11:52.622 ************************************ 00:11:52.622 11:55:57 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:11:52.622 11:55:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:52.622 11:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.622 11:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 ************************************ 00:11:52.623 START TEST dd_rw_offset 00:11:52.623 ************************************ 00:11:52.623 11:55:57 -- common/autotest_common.sh@1114 -- # basic_offset 
00:11:52.623 11:55:57 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:11:52.623 11:55:57 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:11:52.623 11:55:57 -- dd/common.sh@98 -- # xtrace_disable 00:11:52.623 11:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 11:55:57 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:11:52.623 11:55:57 -- dd/basic_rw.sh@56 -- # data=5wzp66pjwk2oryzbbv7iy2ntqje96hamheo8tltcnz8373gyogvy2vhmyemql97jdoz451nei08lww5t2qpx5okt2boxvmhgmqznkemym3ag8l9962lhjv2imdaj03leekv7uvsxz0cumxkl1qoxkurnqr35xss6s04e1s9pwqzapxo2r6l2k9c6d7q6xs8yrtultedw5cq8kw90j44grpczdk4rpfadj0tm7flbrfcn27u43otm9jdvufsdqe66mucaxkmkjn49225ztkb315grfklhwgzjteo4fd5xwmtcd60e11qrew9y59yj8n157y4g8ardrm4rsr2dg8cxi0vnn5lg529ypr2thpabxyfqovbp27u5qwo8xvwaxy62v2fwiwoe4lq3ce9gbs2bgr4wce9kb0goszege4tavt51b6rpghvbdtfdvg9bx4plom5our99ktgrxijuo1jai85re3v5l4zl9kq95hyglbxijq1gdt1o93pnqrgfi7si9uv3tsrb5ye2w8kwr6kxouzv1x67jpwq5fneqyeq42likkug1c0rux0peqisjj34hbimym8116h8dhzq3ekqscc9nsmem0yz8xsdjvpnp0eq3jqhk6xmvjtpw8n3owf2517svkatwgdsm94atgwjkzptre8n2cb1benvyfzgm2go0v0lkzo884wieylsftz8nwgytz430oh3wjwcybnc1bze2l6dvb7t59oogd8sv8l8gy2p80r2wzpami59ijmyz03meddeiycjtr6p4gjbbu7wgzs23ylzx352xodktjb9pbqx1ru0xbruibapd6be991i4qyi2n29yqvsjflnf0ejxwix18hd30kj5p8btijpd6ae219klt9oreqfohsqwzei94wrygub48nb8o6dgcatv7595dnfqao05cchjvzpytm8bh3ivxjsi4x0f8af0mi7zfaxifg2k1n6fjpqn9hadmax01b4q53jgjn6m3zf8guqxowsjm9ly1xtmxbfwq1zy5nv3oylalhajs217ymtup6nonj8ijsmlpk7gcfwn0yz6g6y6557fuzov8uon5w2gu804utw6ti2wm3xw8uljwkg5ixl2sqdlntr3utzdo4o6elpkeh9zzvfqro8n1w9tu3axcgbk8xqa4dtvdf3e42x6bx0x3z1j5i96n9qhy72v49uk600pr95l4yzow44zknxpu1rj9kdo8u71j1duaivovzkmpkbcjna2vf0vh0ktfyrznhypufv5bmhb9rsldl848osyiu0f1wybzm8oosjtno1xmnsvcq1nbe6hla12gwxefpel9yw77crhajunoqcpj4pwjpq0mkkw36zk6uu7e4dqrc8tlv8kgm8qzpc8jc11pg2pup54u9tnmykxr6iw0m4niiv8964ckv3f1hswoykgemq2ke4y3tfx1viecrxo0qc1allly5zlpzl1tk0dujyd9ygr6i0squ1ourrno79im3i9rrfruti20ptcz7nw9zckgnouz08pa9eznwxh1ulna1hmy6emq9v4ewzbiv1nb9o2hgzbza8mhjngdh7kapoqe1ln9k28c7774esbrzhbl27vrjyaf1p9zf8dg9rmup2xhcdh1ky0m78g8j3mf1u488drwr5fna21ezoq4dj9rc2531cvk97wyeuwjae9kh9283wtdkbnefbig0s7941jwn6zpe3efmj0w0j2k6h75kx1usdh450xpk3dgs6iohsowcplajmiz1ywerk34nujb6hg9mqgea2y31iviidqf01zi0wau08s81wx711lw4hbowd49scfiypx3cqq3k24mwrudr4ypkh1ykbjzswbabye2dsp5jh84bc8xul098s107oyro1srguj8501v5godq9hnhmvjc7jmwjiekh9tgjgops91t2yklkn1q6dkoqeyab6aa0ivum65aoiaw1dg52zx0mivvvluj3zfq8xuqmf1vtu28ahnoc4psrhldjtngdi16tmkufrl0w2gfupuiu9nhg2g89a0hgkz10zg31ksu48aqd83q0mnej2omaqvbywqu78rbfdsi1dihcbbukjf5p40kip9bja9vwxz743dtsld6ms15fdpp0f6jpgkw9b86gmyusvtkdh8aj21eym2qhk6vxhuivht8u6xgx1fyzatqctaaz3vobs123su4g6qttxh3rs3aj6qfsfepqbatwk44h2repwprwunj90dmai5ufhkn4t9h660d2puictgphuplpjomo1u38tux1ccg44as5hqu32lgsyemebofi3vy5nroj9mnsgfd6ec6de9qwn9ab2296yccj2ty1nsgfz67dl7vitweznwxwigxc5kstxaytroxcqeee7i6cmgi2kqkozi7uul35q9unttz0bzth708g0ulzyoo5iyasz6t6lyo01o60hq4t3flc68lkc4ulor8u9hz9silljefy9tclcvfk0voln29k8pyzhg6smisiekfe6loqfskz8ne3jahqdlye8e27pyowzklep3ay310v1bzcc3k885obiwb53zf0itjnphdlcnf8m0iyszgxe78nb7xchnr81v6inqmui62btmekhoz22facztrp1hcebrs99mxckjyqpg45wa9hiewa7vofmcmj98e4494h0f9zj6oqqvujwe3soljnxg42gh3on1vcg11ok842169njte54y8qzmuqejhfew72ihjajo96u16grfk8xfshnf6e7zg5o9q3itfn8ivwjgbknxvyq2vhtgv921mp5pgmvgmmqxcnf4wnp7z572e4ueh285ms7pyf51yjo2etlh7iyhmnapip08rvc8s010x5j0fhc0l7de7s51i8hwoaghkre2cgcct0osayoq391h5xu4v7qgw708aubv2aqxomzcwkvutm9bt80enqn39cirxg0mvgb6qh0vge7tysx24mtzbq8acmo98for3rhk0ibi2c5igpqqg0h6r7jjtsp6t5o2wtgx0yfacimyavdqqp6eeepwnqawx2gwxt33suvljvgvl4fa5zssu0tk65j52jo167t24w6sohc53qffpwivpilc5ptqfdmp1jxvpr7i1uhm2a
pp6pnzsm80pm2gs2pyxxjpkwkc9xbip7tk5urbe826s5jp3h82do4andor5bbfdutj263ibozkndy424g9s8nqjfcdwocp7d9b3xp29q6dfqvgpgq6sarhzvcedur4sknq30rofdz1y9kzii8ub0f1rn1xjscz8a4zpi0ldgmbm6b1c4xrp4q1nmyas8j7qv2n6jvssyd4lxqv1xa0cyd2m9rx4mk47i6gvixvs1co6b7zsiuyka5l3vn929rf64xruawj6tpknz22vnftlce3w7vp21lsebzsxknyyaxazj5dpakn9rzxygxucevn2x38rebj4rf14nw8o8ig4vk31tx118jolxh730moebv19e1fv17wdlmq0wb22hn9c2gv6guupro9lkzcvsh63knu2bwyfxba0vju0pqwqlhve7kc9vu5ywxjnkc8mf7d2kbg95vnmghdlwly0apl4jk7k1hi1txtzig45veqb5fzmg8g3sjxab776qtzvsk8aq37dcpy5hhs0snupxqfxs207e4ubedubipxnlnfnwyo5tpze49xwjcq7h4ex33c5lm1nbo957sggt20522e4qpt274ovnb8peg0a8sqdt3chb4meazcghlfmatzkkpq16oa8j0p1e34kdzfakgxbmkw8eguxb83nox2ngt31r9ewt1g4n9x7lb0txiadgtehlcbtp2v6ou1hzrqz3anpefw8b1rt9xu7tbulqhvpvxb2n0vymvagrdeq3w26rmthxyuaf2cwxazmvnw6yp2lzks66m763eovnuh7myqcg3os0a7dn214aiv0tcd647m675nzqrk87gwmmrf9mhujyiqmwr5ba5qzzltbti9o5ufggj3tn5wtojjcq7seccnsyjvga34ilv0qlgu75ush1kkhulvixyv6czqukke3omdwsfvhpuf9h 00:11:52.623 11:55:57 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:11:52.623 11:55:57 -- dd/basic_rw.sh@59 -- # gen_conf 00:11:52.623 11:55:57 -- dd/common.sh@31 -- # xtrace_disable 00:11:52.623 11:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 [2024-11-29 11:55:58.035535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:52.623 [2024-11-29 11:55:58.036373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70085 ] 00:11:52.623 { 00:11:52.623 "subsystems": [ 00:11:52.623 { 00:11:52.623 "subsystem": "bdev", 00:11:52.623 "config": [ 00:11:52.623 { 00:11:52.623 "params": { 00:11:52.623 "trtype": "pcie", 00:11:52.623 "traddr": "0000:00:06.0", 00:11:52.623 "name": "Nvme0" 00:11:52.623 }, 00:11:52.623 "method": "bdev_nvme_attach_controller" 00:11:52.623 }, 00:11:52.623 { 00:11:52.623 "method": "bdev_wait_for_examine" 00:11:52.623 } 00:11:52.623 ] 00:11:52.623 } 00:11:52.623 ] 00:11:52.623 } 00:11:52.881 [2024-11-29 11:55:58.179016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.881 [2024-11-29 11:55:58.320485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.140  [2024-11-29T11:55:58.909Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:11:53.398 00:11:53.398 11:55:58 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:11:53.398 11:55:58 -- dd/basic_rw.sh@65 -- # gen_conf 00:11:53.398 11:55:58 -- dd/common.sh@31 -- # xtrace_disable 00:11:53.398 11:55:58 -- common/autotest_common.sh@10 -- # set +x 00:11:53.656 [2024-11-29 11:55:58.949570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:53.656 [2024-11-29 11:55:58.949708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70103 ] 00:11:53.656 { 00:11:53.656 "subsystems": [ 00:11:53.656 { 00:11:53.656 "subsystem": "bdev", 00:11:53.656 "config": [ 00:11:53.656 { 00:11:53.656 "params": { 00:11:53.656 "trtype": "pcie", 00:11:53.656 "traddr": "0000:00:06.0", 00:11:53.656 "name": "Nvme0" 00:11:53.656 }, 00:11:53.656 "method": "bdev_nvme_attach_controller" 00:11:53.656 }, 00:11:53.656 { 00:11:53.656 "method": "bdev_wait_for_examine" 00:11:53.656 } 00:11:53.656 ] 00:11:53.656 } 00:11:53.656 ] 00:11:53.656 } 00:11:53.656 [2024-11-29 11:55:59.086262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.915 [2024-11-29 11:55:59.223815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.915  [2024-11-29T11:55:59.994Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:11:54.483 00:11:54.483 11:55:59 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:11:54.483 ************************************ 00:11:54.483 END TEST dd_rw_offset 00:11:54.483 ************************************ 00:11:54.484 11:55:59 -- dd/basic_rw.sh@72 -- # [[ 5wzp66pjwk2oryzbbv7iy2ntqje96hamheo8tltcnz8373gyogvy2vhmyemql97jdoz451nei08lww5t2qpx5okt2boxvmhgmqznkemym3ag8l9962lhjv2imdaj03leekv7uvsxz0cumxkl1qoxkurnqr35xss6s04e1s9pwqzapxo2r6l2k9c6d7q6xs8yrtultedw5cq8kw90j44grpczdk4rpfadj0tm7flbrfcn27u43otm9jdvufsdqe66mucaxkmkjn49225ztkb315grfklhwgzjteo4fd5xwmtcd60e11qrew9y59yj8n157y4g8ardrm4rsr2dg8cxi0vnn5lg529ypr2thpabxyfqovbp27u5qwo8xvwaxy62v2fwiwoe4lq3ce9gbs2bgr4wce9kb0goszege4tavt51b6rpghvbdtfdvg9bx4plom5our99ktgrxijuo1jai85re3v5l4zl9kq95hyglbxijq1gdt1o93pnqrgfi7si9uv3tsrb5ye2w8kwr6kxouzv1x67jpwq5fneqyeq42likkug1c0rux0peqisjj34hbimym8116h8dhzq3ekqscc9nsmem0yz8xsdjvpnp0eq3jqhk6xmvjtpw8n3owf2517svkatwgdsm94atgwjkzptre8n2cb1benvyfzgm2go0v0lkzo884wieylsftz8nwgytz430oh3wjwcybnc1bze2l6dvb7t59oogd8sv8l8gy2p80r2wzpami59ijmyz03meddeiycjtr6p4gjbbu7wgzs23ylzx352xodktjb9pbqx1ru0xbruibapd6be991i4qyi2n29yqvsjflnf0ejxwix18hd30kj5p8btijpd6ae219klt9oreqfohsqwzei94wrygub48nb8o6dgcatv7595dnfqao05cchjvzpytm8bh3ivxjsi4x0f8af0mi7zfaxifg2k1n6fjpqn9hadmax01b4q53jgjn6m3zf8guqxowsjm9ly1xtmxbfwq1zy5nv3oylalhajs217ymtup6nonj8ijsmlpk7gcfwn0yz6g6y6557fuzov8uon5w2gu804utw6ti2wm3xw8uljwkg5ixl2sqdlntr3utzdo4o6elpkeh9zzvfqro8n1w9tu3axcgbk8xqa4dtvdf3e42x6bx0x3z1j5i96n9qhy72v49uk600pr95l4yzow44zknxpu1rj9kdo8u71j1duaivovzkmpkbcjna2vf0vh0ktfyrznhypufv5bmhb9rsldl848osyiu0f1wybzm8oosjtno1xmnsvcq1nbe6hla12gwxefpel9yw77crhajunoqcpj4pwjpq0mkkw36zk6uu7e4dqrc8tlv8kgm8qzpc8jc11pg2pup54u9tnmykxr6iw0m4niiv8964ckv3f1hswoykgemq2ke4y3tfx1viecrxo0qc1allly5zlpzl1tk0dujyd9ygr6i0squ1ourrno79im3i9rrfruti20ptcz7nw9zckgnouz08pa9eznwxh1ulna1hmy6emq9v4ewzbiv1nb9o2hgzbza8mhjngdh7kapoqe1ln9k28c7774esbrzhbl27vrjyaf1p9zf8dg9rmup2xhcdh1ky0m78g8j3mf1u488drwr5fna21ezoq4dj9rc2531cvk97wyeuwjae9kh9283wtdkbnefbig0s7941jwn6zpe3efmj0w0j2k6h75kx1usdh450xpk3dgs6iohsowcplajmiz1ywerk34nujb6hg9mqgea2y31iviidqf01zi0wau08s81wx711lw4hbowd49scfiypx3cqq3k24mwrudr4ypkh1ykbjzswbabye2dsp5jh84bc8xul098s107oyro1srguj8501v5godq9hnhmvjc7jmwjiekh9tgjgops91t2yklkn1q6dkoqeyab6aa0ivum65aoiaw1dg52zx0mivvvluj3zfq8xuqmf1vtu28ahnoc4psrhldjtngdi16tmkufrl0w2gfupuiu9nhg2g89a0hgkz10zg31ksu48aqd83q0mnej2omaqvbywqu78rbfdsi1dihcbbukjf5p40kip9bja9vwxz743dtsld6ms15fdpp0f6jpgkw9b86gmyusvtkdh8aj21eym2qhk6vxhuivht8u6xgx1fyzatqctaaz3vobs123su4g6qttxh3r
s3aj6qfsfepqbatwk44h2repwprwunj90dmai5ufhkn4t9h660d2puictgphuplpjomo1u38tux1ccg44as5hqu32lgsyemebofi3vy5nroj9mnsgfd6ec6de9qwn9ab2296yccj2ty1nsgfz67dl7vitweznwxwigxc5kstxaytroxcqeee7i6cmgi2kqkozi7uul35q9unttz0bzth708g0ulzyoo5iyasz6t6lyo01o60hq4t3flc68lkc4ulor8u9hz9silljefy9tclcvfk0voln29k8pyzhg6smisiekfe6loqfskz8ne3jahqdlye8e27pyowzklep3ay310v1bzcc3k885obiwb53zf0itjnphdlcnf8m0iyszgxe78nb7xchnr81v6inqmui62btmekhoz22facztrp1hcebrs99mxckjyqpg45wa9hiewa7vofmcmj98e4494h0f9zj6oqqvujwe3soljnxg42gh3on1vcg11ok842169njte54y8qzmuqejhfew72ihjajo96u16grfk8xfshnf6e7zg5o9q3itfn8ivwjgbknxvyq2vhtgv921mp5pgmvgmmqxcnf4wnp7z572e4ueh285ms7pyf51yjo2etlh7iyhmnapip08rvc8s010x5j0fhc0l7de7s51i8hwoaghkre2cgcct0osayoq391h5xu4v7qgw708aubv2aqxomzcwkvutm9bt80enqn39cirxg0mvgb6qh0vge7tysx24mtzbq8acmo98for3rhk0ibi2c5igpqqg0h6r7jjtsp6t5o2wtgx0yfacimyavdqqp6eeepwnqawx2gwxt33suvljvgvl4fa5zssu0tk65j52jo167t24w6sohc53qffpwivpilc5ptqfdmp1jxvpr7i1uhm2app6pnzsm80pm2gs2pyxxjpkwkc9xbip7tk5urbe826s5jp3h82do4andor5bbfdutj263ibozkndy424g9s8nqjfcdwocp7d9b3xp29q6dfqvgpgq6sarhzvcedur4sknq30rofdz1y9kzii8ub0f1rn1xjscz8a4zpi0ldgmbm6b1c4xrp4q1nmyas8j7qv2n6jvssyd4lxqv1xa0cyd2m9rx4mk47i6gvixvs1co6b7zsiuyka5l3vn929rf64xruawj6tpknz22vnftlce3w7vp21lsebzsxknyyaxazj5dpakn9rzxygxucevn2x38rebj4rf14nw8o8ig4vk31tx118jolxh730moebv19e1fv17wdlmq0wb22hn9c2gv6guupro9lkzcvsh63knu2bwyfxba0vju0pqwqlhve7kc9vu5ywxjnkc8mf7d2kbg95vnmghdlwly0apl4jk7k1hi1txtzig45veqb5fzmg8g3sjxab776qtzvsk8aq37dcpy5hhs0snupxqfxs207e4ubedubipxnlnfnwyo5tpze49xwjcq7h4ex33c5lm1nbo957sggt20522e4qpt274ovnb8peg0a8sqdt3chb4meazcghlfmatzkkpq16oa8j0p1e34kdzfakgxbmkw8eguxb83nox2ngt31r9ewt1g4n9x7lb0txiadgtehlcbtp2v6ou1hzrqz3anpefw8b1rt9xu7tbulqhvpvxb2n0vymvagrdeq3w26rmthxyuaf2cwxazmvnw6yp2lzks66m763eovnuh7myqcg3os0a7dn214aiv0tcd647m675nzqrk87gwmmrf9mhujyiqmwr5ba5qzzltbti9o5ufggj3tn5wtojjcq7seccnsyjvga34ilv0qlgu75ush1kkhulvixyv6czqukke3omdwsfvhpuf9h == 
\5\w\z\p\6\6\p\j\w\k\2\o\r\y\z\b\b\v\7\i\y\2\n\t\q\j\e\9\6\h\a\m\h\e\o\8\t\l\t\c\n\z\8\3\7\3\g\y\o\g\v\y\2\v\h\m\y\e\m\q\l\9\7\j\d\o\z\4\5\1\n\e\i\0\8\l\w\w\5\t\2\q\p\x\5\o\k\t\2\b\o\x\v\m\h\g\m\q\z\n\k\e\m\y\m\3\a\g\8\l\9\9\6\2\l\h\j\v\2\i\m\d\a\j\0\3\l\e\e\k\v\7\u\v\s\x\z\0\c\u\m\x\k\l\1\q\o\x\k\u\r\n\q\r\3\5\x\s\s\6\s\0\4\e\1\s\9\p\w\q\z\a\p\x\o\2\r\6\l\2\k\9\c\6\d\7\q\6\x\s\8\y\r\t\u\l\t\e\d\w\5\c\q\8\k\w\9\0\j\4\4\g\r\p\c\z\d\k\4\r\p\f\a\d\j\0\t\m\7\f\l\b\r\f\c\n\2\7\u\4\3\o\t\m\9\j\d\v\u\f\s\d\q\e\6\6\m\u\c\a\x\k\m\k\j\n\4\9\2\2\5\z\t\k\b\3\1\5\g\r\f\k\l\h\w\g\z\j\t\e\o\4\f\d\5\x\w\m\t\c\d\6\0\e\1\1\q\r\e\w\9\y\5\9\y\j\8\n\1\5\7\y\4\g\8\a\r\d\r\m\4\r\s\r\2\d\g\8\c\x\i\0\v\n\n\5\l\g\5\2\9\y\p\r\2\t\h\p\a\b\x\y\f\q\o\v\b\p\2\7\u\5\q\w\o\8\x\v\w\a\x\y\6\2\v\2\f\w\i\w\o\e\4\l\q\3\c\e\9\g\b\s\2\b\g\r\4\w\c\e\9\k\b\0\g\o\s\z\e\g\e\4\t\a\v\t\5\1\b\6\r\p\g\h\v\b\d\t\f\d\v\g\9\b\x\4\p\l\o\m\5\o\u\r\9\9\k\t\g\r\x\i\j\u\o\1\j\a\i\8\5\r\e\3\v\5\l\4\z\l\9\k\q\9\5\h\y\g\l\b\x\i\j\q\1\g\d\t\1\o\9\3\p\n\q\r\g\f\i\7\s\i\9\u\v\3\t\s\r\b\5\y\e\2\w\8\k\w\r\6\k\x\o\u\z\v\1\x\6\7\j\p\w\q\5\f\n\e\q\y\e\q\4\2\l\i\k\k\u\g\1\c\0\r\u\x\0\p\e\q\i\s\j\j\3\4\h\b\i\m\y\m\8\1\1\6\h\8\d\h\z\q\3\e\k\q\s\c\c\9\n\s\m\e\m\0\y\z\8\x\s\d\j\v\p\n\p\0\e\q\3\j\q\h\k\6\x\m\v\j\t\p\w\8\n\3\o\w\f\2\5\1\7\s\v\k\a\t\w\g\d\s\m\9\4\a\t\g\w\j\k\z\p\t\r\e\8\n\2\c\b\1\b\e\n\v\y\f\z\g\m\2\g\o\0\v\0\l\k\z\o\8\8\4\w\i\e\y\l\s\f\t\z\8\n\w\g\y\t\z\4\3\0\o\h\3\w\j\w\c\y\b\n\c\1\b\z\e\2\l\6\d\v\b\7\t\5\9\o\o\g\d\8\s\v\8\l\8\g\y\2\p\8\0\r\2\w\z\p\a\m\i\5\9\i\j\m\y\z\0\3\m\e\d\d\e\i\y\c\j\t\r\6\p\4\g\j\b\b\u\7\w\g\z\s\2\3\y\l\z\x\3\5\2\x\o\d\k\t\j\b\9\p\b\q\x\1\r\u\0\x\b\r\u\i\b\a\p\d\6\b\e\9\9\1\i\4\q\y\i\2\n\2\9\y\q\v\s\j\f\l\n\f\0\e\j\x\w\i\x\1\8\h\d\3\0\k\j\5\p\8\b\t\i\j\p\d\6\a\e\2\1\9\k\l\t\9\o\r\e\q\f\o\h\s\q\w\z\e\i\9\4\w\r\y\g\u\b\4\8\n\b\8\o\6\d\g\c\a\t\v\7\5\9\5\d\n\f\q\a\o\0\5\c\c\h\j\v\z\p\y\t\m\8\b\h\3\i\v\x\j\s\i\4\x\0\f\8\a\f\0\m\i\7\z\f\a\x\i\f\g\2\k\1\n\6\f\j\p\q\n\9\h\a\d\m\a\x\0\1\b\4\q\5\3\j\g\j\n\6\m\3\z\f\8\g\u\q\x\o\w\s\j\m\9\l\y\1\x\t\m\x\b\f\w\q\1\z\y\5\n\v\3\o\y\l\a\l\h\a\j\s\2\1\7\y\m\t\u\p\6\n\o\n\j\8\i\j\s\m\l\p\k\7\g\c\f\w\n\0\y\z\6\g\6\y\6\5\5\7\f\u\z\o\v\8\u\o\n\5\w\2\g\u\8\0\4\u\t\w\6\t\i\2\w\m\3\x\w\8\u\l\j\w\k\g\5\i\x\l\2\s\q\d\l\n\t\r\3\u\t\z\d\o\4\o\6\e\l\p\k\e\h\9\z\z\v\f\q\r\o\8\n\1\w\9\t\u\3\a\x\c\g\b\k\8\x\q\a\4\d\t\v\d\f\3\e\4\2\x\6\b\x\0\x\3\z\1\j\5\i\9\6\n\9\q\h\y\7\2\v\4\9\u\k\6\0\0\p\r\9\5\l\4\y\z\o\w\4\4\z\k\n\x\p\u\1\r\j\9\k\d\o\8\u\7\1\j\1\d\u\a\i\v\o\v\z\k\m\p\k\b\c\j\n\a\2\v\f\0\v\h\0\k\t\f\y\r\z\n\h\y\p\u\f\v\5\b\m\h\b\9\r\s\l\d\l\8\4\8\o\s\y\i\u\0\f\1\w\y\b\z\m\8\o\o\s\j\t\n\o\1\x\m\n\s\v\c\q\1\n\b\e\6\h\l\a\1\2\g\w\x\e\f\p\e\l\9\y\w\7\7\c\r\h\a\j\u\n\o\q\c\p\j\4\p\w\j\p\q\0\m\k\k\w\3\6\z\k\6\u\u\7\e\4\d\q\r\c\8\t\l\v\8\k\g\m\8\q\z\p\c\8\j\c\1\1\p\g\2\p\u\p\5\4\u\9\t\n\m\y\k\x\r\6\i\w\0\m\4\n\i\i\v\8\9\6\4\c\k\v\3\f\1\h\s\w\o\y\k\g\e\m\q\2\k\e\4\y\3\t\f\x\1\v\i\e\c\r\x\o\0\q\c\1\a\l\l\l\y\5\z\l\p\z\l\1\t\k\0\d\u\j\y\d\9\y\g\r\6\i\0\s\q\u\1\o\u\r\r\n\o\7\9\i\m\3\i\9\r\r\f\r\u\t\i\2\0\p\t\c\z\7\n\w\9\z\c\k\g\n\o\u\z\0\8\p\a\9\e\z\n\w\x\h\1\u\l\n\a\1\h\m\y\6\e\m\q\9\v\4\e\w\z\b\i\v\1\n\b\9\o\2\h\g\z\b\z\a\8\m\h\j\n\g\d\h\7\k\a\p\o\q\e\1\l\n\9\k\2\8\c\7\7\7\4\e\s\b\r\z\h\b\l\2\7\v\r\j\y\a\f\1\p\9\z\f\8\d\g\9\r\m\u\p\2\x\h\c\d\h\1\k\y\0\m\7\8\g\8\j\3\m\f\1\u\4\8\8\d\r\w\r\5\f\n\a\2\1\e\z\o\q\4\d\j\9\r\c\2\5\3\1\c\v\k\9\7\w\y\e\u\w\j\a\e\9\k\h\9\2\8\3\w\t\d\k\b\n\e\f\b\i\g\0\s\7\9\4\1\j\w\n\6\z\p\e\3\e\f\m\j\0\w\0\j\2\k\6\h\7\5\k\x\1\u\s\d\h\4\5\0\x\p\k\3\d\g\s\6\i\o\h\s\o\w\c\p\l\a\j\m\i\z\1\y\w\
e\r\k\3\4\n\u\j\b\6\h\g\9\m\q\g\e\a\2\y\3\1\i\v\i\i\d\q\f\0\1\z\i\0\w\a\u\0\8\s\8\1\w\x\7\1\1\l\w\4\h\b\o\w\d\4\9\s\c\f\i\y\p\x\3\c\q\q\3\k\2\4\m\w\r\u\d\r\4\y\p\k\h\1\y\k\b\j\z\s\w\b\a\b\y\e\2\d\s\p\5\j\h\8\4\b\c\8\x\u\l\0\9\8\s\1\0\7\o\y\r\o\1\s\r\g\u\j\8\5\0\1\v\5\g\o\d\q\9\h\n\h\m\v\j\c\7\j\m\w\j\i\e\k\h\9\t\g\j\g\o\p\s\9\1\t\2\y\k\l\k\n\1\q\6\d\k\o\q\e\y\a\b\6\a\a\0\i\v\u\m\6\5\a\o\i\a\w\1\d\g\5\2\z\x\0\m\i\v\v\v\l\u\j\3\z\f\q\8\x\u\q\m\f\1\v\t\u\2\8\a\h\n\o\c\4\p\s\r\h\l\d\j\t\n\g\d\i\1\6\t\m\k\u\f\r\l\0\w\2\g\f\u\p\u\i\u\9\n\h\g\2\g\8\9\a\0\h\g\k\z\1\0\z\g\3\1\k\s\u\4\8\a\q\d\8\3\q\0\m\n\e\j\2\o\m\a\q\v\b\y\w\q\u\7\8\r\b\f\d\s\i\1\d\i\h\c\b\b\u\k\j\f\5\p\4\0\k\i\p\9\b\j\a\9\v\w\x\z\7\4\3\d\t\s\l\d\6\m\s\1\5\f\d\p\p\0\f\6\j\p\g\k\w\9\b\8\6\g\m\y\u\s\v\t\k\d\h\8\a\j\2\1\e\y\m\2\q\h\k\6\v\x\h\u\i\v\h\t\8\u\6\x\g\x\1\f\y\z\a\t\q\c\t\a\a\z\3\v\o\b\s\1\2\3\s\u\4\g\6\q\t\t\x\h\3\r\s\3\a\j\6\q\f\s\f\e\p\q\b\a\t\w\k\4\4\h\2\r\e\p\w\p\r\w\u\n\j\9\0\d\m\a\i\5\u\f\h\k\n\4\t\9\h\6\6\0\d\2\p\u\i\c\t\g\p\h\u\p\l\p\j\o\m\o\1\u\3\8\t\u\x\1\c\c\g\4\4\a\s\5\h\q\u\3\2\l\g\s\y\e\m\e\b\o\f\i\3\v\y\5\n\r\o\j\9\m\n\s\g\f\d\6\e\c\6\d\e\9\q\w\n\9\a\b\2\2\9\6\y\c\c\j\2\t\y\1\n\s\g\f\z\6\7\d\l\7\v\i\t\w\e\z\n\w\x\w\i\g\x\c\5\k\s\t\x\a\y\t\r\o\x\c\q\e\e\e\7\i\6\c\m\g\i\2\k\q\k\o\z\i\7\u\u\l\3\5\q\9\u\n\t\t\z\0\b\z\t\h\7\0\8\g\0\u\l\z\y\o\o\5\i\y\a\s\z\6\t\6\l\y\o\0\1\o\6\0\h\q\4\t\3\f\l\c\6\8\l\k\c\4\u\l\o\r\8\u\9\h\z\9\s\i\l\l\j\e\f\y\9\t\c\l\c\v\f\k\0\v\o\l\n\2\9\k\8\p\y\z\h\g\6\s\m\i\s\i\e\k\f\e\6\l\o\q\f\s\k\z\8\n\e\3\j\a\h\q\d\l\y\e\8\e\2\7\p\y\o\w\z\k\l\e\p\3\a\y\3\1\0\v\1\b\z\c\c\3\k\8\8\5\o\b\i\w\b\5\3\z\f\0\i\t\j\n\p\h\d\l\c\n\f\8\m\0\i\y\s\z\g\x\e\7\8\n\b\7\x\c\h\n\r\8\1\v\6\i\n\q\m\u\i\6\2\b\t\m\e\k\h\o\z\2\2\f\a\c\z\t\r\p\1\h\c\e\b\r\s\9\9\m\x\c\k\j\y\q\p\g\4\5\w\a\9\h\i\e\w\a\7\v\o\f\m\c\m\j\9\8\e\4\4\9\4\h\0\f\9\z\j\6\o\q\q\v\u\j\w\e\3\s\o\l\j\n\x\g\4\2\g\h\3\o\n\1\v\c\g\1\1\o\k\8\4\2\1\6\9\n\j\t\e\5\4\y\8\q\z\m\u\q\e\j\h\f\e\w\7\2\i\h\j\a\j\o\9\6\u\1\6\g\r\f\k\8\x\f\s\h\n\f\6\e\7\z\g\5\o\9\q\3\i\t\f\n\8\i\v\w\j\g\b\k\n\x\v\y\q\2\v\h\t\g\v\9\2\1\m\p\5\p\g\m\v\g\m\m\q\x\c\n\f\4\w\n\p\7\z\5\7\2\e\4\u\e\h\2\8\5\m\s\7\p\y\f\5\1\y\j\o\2\e\t\l\h\7\i\y\h\m\n\a\p\i\p\0\8\r\v\c\8\s\0\1\0\x\5\j\0\f\h\c\0\l\7\d\e\7\s\5\1\i\8\h\w\o\a\g\h\k\r\e\2\c\g\c\c\t\0\o\s\a\y\o\q\3\9\1\h\5\x\u\4\v\7\q\g\w\7\0\8\a\u\b\v\2\a\q\x\o\m\z\c\w\k\v\u\t\m\9\b\t\8\0\e\n\q\n\3\9\c\i\r\x\g\0\m\v\g\b\6\q\h\0\v\g\e\7\t\y\s\x\2\4\m\t\z\b\q\8\a\c\m\o\9\8\f\o\r\3\r\h\k\0\i\b\i\2\c\5\i\g\p\q\q\g\0\h\6\r\7\j\j\t\s\p\6\t\5\o\2\w\t\g\x\0\y\f\a\c\i\m\y\a\v\d\q\q\p\6\e\e\e\p\w\n\q\a\w\x\2\g\w\x\t\3\3\s\u\v\l\j\v\g\v\l\4\f\a\5\z\s\s\u\0\t\k\6\5\j\5\2\j\o\1\6\7\t\2\4\w\6\s\o\h\c\5\3\q\f\f\p\w\i\v\p\i\l\c\5\p\t\q\f\d\m\p\1\j\x\v\p\r\7\i\1\u\h\m\2\a\p\p\6\p\n\z\s\m\8\0\p\m\2\g\s\2\p\y\x\x\j\p\k\w\k\c\9\x\b\i\p\7\t\k\5\u\r\b\e\8\2\6\s\5\j\p\3\h\8\2\d\o\4\a\n\d\o\r\5\b\b\f\d\u\t\j\2\6\3\i\b\o\z\k\n\d\y\4\2\4\g\9\s\8\n\q\j\f\c\d\w\o\c\p\7\d\9\b\3\x\p\2\9\q\6\d\f\q\v\g\p\g\q\6\s\a\r\h\z\v\c\e\d\u\r\4\s\k\n\q\3\0\r\o\f\d\z\1\y\9\k\z\i\i\8\u\b\0\f\1\r\n\1\x\j\s\c\z\8\a\4\z\p\i\0\l\d\g\m\b\m\6\b\1\c\4\x\r\p\4\q\1\n\m\y\a\s\8\j\7\q\v\2\n\6\j\v\s\s\y\d\4\l\x\q\v\1\x\a\0\c\y\d\2\m\9\r\x\4\m\k\4\7\i\6\g\v\i\x\v\s\1\c\o\6\b\7\z\s\i\u\y\k\a\5\l\3\v\n\9\2\9\r\f\6\4\x\r\u\a\w\j\6\t\p\k\n\z\2\2\v\n\f\t\l\c\e\3\w\7\v\p\2\1\l\s\e\b\z\s\x\k\n\y\y\a\x\a\z\j\5\d\p\a\k\n\9\r\z\x\y\g\x\u\c\e\v\n\2\x\3\8\r\e\b\j\4\r\f\1\4\n\w\8\o\8\i\g\4\v\k\3\1\t\x\1\1\8\j\o\l\x\h\7\3\0\m\o\e\b\v\1\9\e\1\f\v\1\7\w\d\l\m\q\0\w\b\2\2\h\n\9\c\2\g\v\6\g\u\u\p\r\o\9\l\k\z\c\v\s\h\6\3\k\n\u\2
\b\w\y\f\x\b\a\0\v\j\u\0\p\q\w\q\l\h\v\e\7\k\c\9\v\u\5\y\w\x\j\n\k\c\8\m\f\7\d\2\k\b\g\9\5\v\n\m\g\h\d\l\w\l\y\0\a\p\l\4\j\k\7\k\1\h\i\1\t\x\t\z\i\g\4\5\v\e\q\b\5\f\z\m\g\8\g\3\s\j\x\a\b\7\7\6\q\t\z\v\s\k\8\a\q\3\7\d\c\p\y\5\h\h\s\0\s\n\u\p\x\q\f\x\s\2\0\7\e\4\u\b\e\d\u\b\i\p\x\n\l\n\f\n\w\y\o\5\t\p\z\e\4\9\x\w\j\c\q\7\h\4\e\x\3\3\c\5\l\m\1\n\b\o\9\5\7\s\g\g\t\2\0\5\2\2\e\4\q\p\t\2\7\4\o\v\n\b\8\p\e\g\0\a\8\s\q\d\t\3\c\h\b\4\m\e\a\z\c\g\h\l\f\m\a\t\z\k\k\p\q\1\6\o\a\8\j\0\p\1\e\3\4\k\d\z\f\a\k\g\x\b\m\k\w\8\e\g\u\x\b\8\3\n\o\x\2\n\g\t\3\1\r\9\e\w\t\1\g\4\n\9\x\7\l\b\0\t\x\i\a\d\g\t\e\h\l\c\b\t\p\2\v\6\o\u\1\h\z\r\q\z\3\a\n\p\e\f\w\8\b\1\r\t\9\x\u\7\t\b\u\l\q\h\v\p\v\x\b\2\n\0\v\y\m\v\a\g\r\d\e\q\3\w\2\6\r\m\t\h\x\y\u\a\f\2\c\w\x\a\z\m\v\n\w\6\y\p\2\l\z\k\s\6\6\m\7\6\3\e\o\v\n\u\h\7\m\y\q\c\g\3\o\s\0\a\7\d\n\2\1\4\a\i\v\0\t\c\d\6\4\7\m\6\7\5\n\z\q\r\k\8\7\g\w\m\m\r\f\9\m\h\u\j\y\i\q\m\w\r\5\b\a\5\q\z\z\l\t\b\t\i\9\o\5\u\f\g\g\j\3\t\n\5\w\t\o\j\j\c\q\7\s\e\c\c\n\s\y\j\v\g\a\3\4\i\l\v\0\q\l\g\u\7\5\u\s\h\1\k\k\h\u\l\v\i\x\y\v\6\c\z\q\u\k\k\e\3\o\m\d\w\s\f\v\h\p\u\f\9\h ]] 00:11:54.484 00:11:54.484 real 0m1.826s 00:11:54.484 user 0m1.245s 00:11:54.484 sys 0m0.454s 00:11:54.484 11:55:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.484 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 11:55:59 -- dd/basic_rw.sh@1 -- # cleanup 00:11:54.484 11:55:59 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:11:54.484 11:55:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:54.484 11:55:59 -- dd/common.sh@11 -- # local nvme_ref= 00:11:54.484 11:55:59 -- dd/common.sh@12 -- # local size=0xffff 00:11:54.484 11:55:59 -- dd/common.sh@14 -- # local bs=1048576 00:11:54.484 11:55:59 -- dd/common.sh@15 -- # local count=1 00:11:54.484 11:55:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:54.484 11:55:59 -- dd/common.sh@18 -- # gen_conf 00:11:54.484 11:55:59 -- dd/common.sh@31 -- # xtrace_disable 00:11:54.484 11:55:59 -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 [2024-11-29 11:55:59.859468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
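The dd_rw_offset case that finishes here writes a single 4096-byte block of generated data one block into the bdev (--seek=1) and reads it back from the same offset (--skip=1 --count=1) before comparing it against the original. A minimal sketch of that round trip, reusing the hypothetical /tmp/bdev.json from the earlier sketch and cmp in place of the script's in-shell string comparison:

    # Sketch only: offset write/read as exercised by basic_offset above.
    head -c 4096 /dev/urandom > /tmp/dd.dump0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --seek=1 --json /tmp/bdev.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump1 --skip=1 --count=1 --json /tmp/bdev.json
    cmp /tmp/dd.dump0 /tmp/dd.dump1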
00:11:54.484 [2024-11-29 11:55:59.859650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70131 ] 00:11:54.484 { 00:11:54.484 "subsystems": [ 00:11:54.484 { 00:11:54.484 "subsystem": "bdev", 00:11:54.484 "config": [ 00:11:54.484 { 00:11:54.484 "params": { 00:11:54.484 "trtype": "pcie", 00:11:54.484 "traddr": "0000:00:06.0", 00:11:54.484 "name": "Nvme0" 00:11:54.484 }, 00:11:54.484 "method": "bdev_nvme_attach_controller" 00:11:54.484 }, 00:11:54.484 { 00:11:54.484 "method": "bdev_wait_for_examine" 00:11:54.484 } 00:11:54.484 ] 00:11:54.484 } 00:11:54.484 ] 00:11:54.484 } 00:11:54.743 [2024-11-29 11:56:00.000566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.743 [2024-11-29 11:56:00.134816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.002  [2024-11-29T11:56:00.772Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:55.261 00:11:55.261 11:56:00 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:55.261 00:11:55.261 real 0m23.340s 00:11:55.261 user 0m16.577s 00:11:55.261 sys 0m5.334s 00:11:55.261 11:56:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:55.261 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:11:55.261 ************************************ 00:11:55.261 END TEST spdk_dd_basic_rw 00:11:55.261 ************************************ 00:11:55.261 11:56:00 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:55.261 11:56:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:55.261 11:56:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.261 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:11:55.261 ************************************ 00:11:55.261 START TEST spdk_dd_posix 00:11:55.261 ************************************ 00:11:55.261 11:56:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:55.521 * Looking for test storage... 
00:11:55.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:55.521 11:56:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:55.521 11:56:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:55.521 11:56:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:55.521 11:56:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:55.521 11:56:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:55.521 11:56:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:55.521 11:56:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:55.521 11:56:00 -- scripts/common.sh@335 -- # IFS=.-: 00:11:55.521 11:56:00 -- scripts/common.sh@335 -- # read -ra ver1 00:11:55.521 11:56:00 -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.521 11:56:00 -- scripts/common.sh@336 -- # read -ra ver2 00:11:55.521 11:56:00 -- scripts/common.sh@337 -- # local 'op=<' 00:11:55.521 11:56:00 -- scripts/common.sh@339 -- # ver1_l=2 00:11:55.521 11:56:00 -- scripts/common.sh@340 -- # ver2_l=1 00:11:55.521 11:56:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:55.521 11:56:00 -- scripts/common.sh@343 -- # case "$op" in 00:11:55.521 11:56:00 -- scripts/common.sh@344 -- # : 1 00:11:55.521 11:56:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:55.521 11:56:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.521 11:56:00 -- scripts/common.sh@364 -- # decimal 1 00:11:55.521 11:56:00 -- scripts/common.sh@352 -- # local d=1 00:11:55.521 11:56:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.521 11:56:00 -- scripts/common.sh@354 -- # echo 1 00:11:55.521 11:56:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:55.521 11:56:00 -- scripts/common.sh@365 -- # decimal 2 00:11:55.521 11:56:00 -- scripts/common.sh@352 -- # local d=2 00:11:55.521 11:56:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.521 11:56:00 -- scripts/common.sh@354 -- # echo 2 00:11:55.521 11:56:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:55.521 11:56:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:55.521 11:56:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:55.521 11:56:00 -- scripts/common.sh@367 -- # return 0 00:11:55.521 11:56:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.521 11:56:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.522 --rc genhtml_branch_coverage=1 00:11:55.522 --rc genhtml_function_coverage=1 00:11:55.522 --rc genhtml_legend=1 00:11:55.522 --rc geninfo_all_blocks=1 00:11:55.522 --rc geninfo_unexecuted_blocks=1 00:11:55.522 00:11:55.522 ' 00:11:55.522 11:56:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.522 --rc genhtml_branch_coverage=1 00:11:55.522 --rc genhtml_function_coverage=1 00:11:55.522 --rc genhtml_legend=1 00:11:55.522 --rc geninfo_all_blocks=1 00:11:55.522 --rc geninfo_unexecuted_blocks=1 00:11:55.522 00:11:55.522 ' 00:11:55.522 11:56:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.522 --rc genhtml_branch_coverage=1 00:11:55.522 --rc genhtml_function_coverage=1 00:11:55.522 --rc genhtml_legend=1 00:11:55.522 --rc geninfo_all_blocks=1 00:11:55.522 --rc geninfo_unexecuted_blocks=1 00:11:55.522 00:11:55.522 ' 00:11:55.522 11:56:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.522 --rc genhtml_branch_coverage=1 00:11:55.522 --rc genhtml_function_coverage=1 00:11:55.522 --rc genhtml_legend=1 00:11:55.522 --rc geninfo_all_blocks=1 00:11:55.522 --rc geninfo_unexecuted_blocks=1 00:11:55.522 00:11:55.522 ' 00:11:55.522 11:56:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.522 11:56:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.522 11:56:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.522 11:56:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.522 11:56:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.522 11:56:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.522 11:56:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.522 11:56:00 -- paths/export.sh@5 -- # export PATH 00:11:55.522 11:56:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.522 11:56:00 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:11:55.522 11:56:00 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:11:55.522 11:56:00 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:11:55.522 11:56:00 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:11:55.522 11:56:00 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:55.522 11:56:00 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:55.522 11:56:00 -- dd/posix.sh@130 -- # tests 00:11:55.522 11:56:00 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:11:55.522 * First test run, liburing in use 00:11:55.522 11:56:00 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:11:55.522 11:56:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:55.522 11:56:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.522 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:11:55.522 ************************************ 00:11:55.522 START TEST dd_flag_append 00:11:55.522 ************************************ 00:11:55.522 11:56:00 -- common/autotest_common.sh@1114 -- # append 00:11:55.522 11:56:00 -- dd/posix.sh@16 -- # local dump0 00:11:55.522 11:56:00 -- dd/posix.sh@17 -- # local dump1 00:11:55.522 11:56:00 -- dd/posix.sh@19 -- # gen_bytes 32 00:11:55.522 11:56:00 -- dd/common.sh@98 -- # xtrace_disable 00:11:55.522 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:11:55.522 11:56:00 -- dd/posix.sh@19 -- # dump0=z3a8oj4j65kwz1zb49mjfa3i400nu7vx 00:11:55.522 11:56:00 -- dd/posix.sh@20 -- # gen_bytes 32 00:11:55.522 11:56:00 -- dd/common.sh@98 -- # xtrace_disable 00:11:55.522 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:11:55.522 11:56:00 -- dd/posix.sh@20 -- # dump1=e8dlms5ux830pcve0auiocz1z27vf1md 00:11:55.522 11:56:00 -- dd/posix.sh@22 -- # printf %s z3a8oj4j65kwz1zb49mjfa3i400nu7vx 00:11:55.522 11:56:00 -- dd/posix.sh@23 -- # printf %s e8dlms5ux830pcve0auiocz1z27vf1md 00:11:55.522 11:56:00 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:55.522 [2024-11-29 11:56:01.002969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:55.522 [2024-11-29 11:56:01.003118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:11:55.781 [2024-11-29 11:56:01.141338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.781 [2024-11-29 11:56:01.279255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.040  [2024-11-29T11:56:01.810Z] Copying: 32/32 [B] (average 31 kBps) 00:11:56.299 00:11:56.299 11:56:01 -- dd/posix.sh@27 -- # [[ e8dlms5ux830pcve0auiocz1z27vf1mdz3a8oj4j65kwz1zb49mjfa3i400nu7vx == \e\8\d\l\m\s\5\u\x\8\3\0\p\c\v\e\0\a\u\i\o\c\z\1\z\2\7\v\f\1\m\d\z\3\a\8\o\j\4\j\6\5\k\w\z\1\z\b\4\9\m\j\f\a\3\i\4\0\0\n\u\7\v\x ]] 00:11:56.299 00:11:56.299 real 0m0.793s 00:11:56.299 user 0m0.451s 00:11:56.299 sys 0m0.221s 00:11:56.299 11:56:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.299 11:56:01 -- common/autotest_common.sh@10 -- # set +x 00:11:56.299 ************************************ 00:11:56.299 END TEST dd_flag_append 00:11:56.299 ************************************ 00:11:56.299 11:56:01 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:11:56.299 11:56:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:56.299 11:56:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.299 11:56:01 -- common/autotest_common.sh@10 -- # set +x 00:11:56.299 ************************************ 00:11:56.299 START TEST dd_flag_directory 00:11:56.299 ************************************ 00:11:56.299 11:56:01 -- common/autotest_common.sh@1114 -- # directory 00:11:56.299 11:56:01 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:56.299 11:56:01 -- common/autotest_common.sh@650 -- # local es=0 00:11:56.299 11:56:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:56.299 11:56:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.299 11:56:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.299 11:56:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.299 11:56:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.299 11:56:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.299 11:56:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.299 11:56:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.299 11:56:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:56.299 11:56:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:56.557 [2024-11-29 11:56:01.837954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
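The dd_flag_append case above checks --oflag=append by generating two distinct 32-character strings, writing them to dump0 and dump1, appending dump0 onto dump1 through spdk_dd, and asserting that dump1 now holds its original contents followed by dump0's. Roughly (hypothetical /tmp paths; a tr pipeline stands in for the gen_bytes helper):

    # Sketch only: append-flag check, plain file to file, no bdev involved.
    dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf %s "$dump0" > /tmp/dd.dump0
    printf %s "$dump1" > /tmp/dd.dump1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --oflag=append
    [[ "$(cat /tmp/dd.dump1)" == "${dump1}${dump0}" ]] && echo 'append OK'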
00:11:56.557 [2024-11-29 11:56:01.838072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70232 ] 00:11:56.557 [2024-11-29 11:56:01.972865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.816 [2024-11-29 11:56:02.111920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.816 [2024-11-29 11:56:02.242984] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:56.816 [2024-11-29 11:56:02.243078] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:56.816 [2024-11-29 11:56:02.243094] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:57.075 [2024-11-29 11:56:02.425507] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:11:57.075 11:56:02 -- common/autotest_common.sh@653 -- # es=236 00:11:57.075 11:56:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.075 11:56:02 -- common/autotest_common.sh@662 -- # es=108 00:11:57.075 11:56:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:57.075 11:56:02 -- common/autotest_common.sh@670 -- # es=1 00:11:57.075 11:56:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.075 11:56:02 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:57.075 11:56:02 -- common/autotest_common.sh@650 -- # local es=0 00:11:57.076 11:56:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:57.076 11:56:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.076 11:56:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.076 11:56:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.076 11:56:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.076 11:56:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.076 11:56:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.076 11:56:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.076 11:56:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:57.076 11:56:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:57.335 [2024-11-29 11:56:02.617996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
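Both halves of the dd_flag_directory case point spdk_dd at a regular file with the directory flag set (--iflag=directory on the input above, then --oflag=directory on the output) and expect the "Not a directory" failure rather than a copy; the suite's NOT wrapper appears to assert a non-zero exit. A sketch of the same negative check with a hypothetical path:

    # Sketch only: directory-flag negative test against a regular file.
    printf 'x' > /tmp/dd.dump0
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/dd.dump0 --iflag=directory --of=/tmp/dd.dump0; then
      echo 'expected a "Not a directory" failure' >&2
      exit 1
    fi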
00:11:57.335 [2024-11-29 11:56:02.618128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:11:57.335 [2024-11-29 11:56:02.754946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.595 [2024-11-29 11:56:02.893255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.595 [2024-11-29 11:56:03.022611] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:57.595 [2024-11-29 11:56:03.022711] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:57.595 [2024-11-29 11:56:03.022727] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:57.853 [2024-11-29 11:56:03.200843] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:11:57.853 11:56:03 -- common/autotest_common.sh@653 -- # es=236 00:11:57.853 11:56:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.853 11:56:03 -- common/autotest_common.sh@662 -- # es=108 00:11:57.853 11:56:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:57.853 11:56:03 -- common/autotest_common.sh@670 -- # es=1 00:11:57.853 11:56:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.853 00:11:57.853 real 0m1.548s 00:11:57.853 user 0m0.922s 00:11:57.853 sys 0m0.412s 00:11:57.853 11:56:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:57.853 11:56:03 -- common/autotest_common.sh@10 -- # set +x 00:11:57.853 ************************************ 00:11:57.853 END TEST dd_flag_directory 00:11:57.853 ************************************ 00:11:58.161 11:56:03 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:11:58.161 11:56:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:58.161 11:56:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.161 11:56:03 -- common/autotest_common.sh@10 -- # set +x 00:11:58.161 ************************************ 00:11:58.161 START TEST dd_flag_nofollow 00:11:58.161 ************************************ 00:11:58.161 11:56:03 -- common/autotest_common.sh@1114 -- # nofollow 00:11:58.161 11:56:03 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:58.161 11:56:03 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:58.161 11:56:03 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:58.161 11:56:03 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:58.161 11:56:03 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:58.161 11:56:03 -- common/autotest_common.sh@650 -- # local es=0 00:11:58.161 11:56:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:58.161 11:56:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.161 11:56:03 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.161 11:56:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.161 11:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.161 11:56:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.161 11:56:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.161 11:56:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.161 11:56:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.161 11:56:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:58.161 [2024-11-29 11:56:03.458446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:58.161 [2024-11-29 11:56:03.458608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70276 ] 00:11:58.161 [2024-11-29 11:56:03.600842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.421 [2024-11-29 11:56:03.741020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.421 [2024-11-29 11:56:03.875452] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:58.421 [2024-11-29 11:56:03.875528] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:58.421 [2024-11-29 11:56:03.875546] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:58.679 [2024-11-29 11:56:04.069943] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:11:58.979 11:56:04 -- common/autotest_common.sh@653 -- # es=216 00:11:58.979 11:56:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.979 11:56:04 -- common/autotest_common.sh@662 -- # es=88 00:11:58.979 11:56:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:58.979 11:56:04 -- common/autotest_common.sh@670 -- # es=1 00:11:58.979 11:56:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.979 11:56:04 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:58.979 11:56:04 -- common/autotest_common.sh@650 -- # local es=0 00:11:58.979 11:56:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:58.979 11:56:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.979 11:56:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.979 11:56:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.979 11:56:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.979 11:56:04 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.979 11:56:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.979 11:56:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.979 11:56:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.979 11:56:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:58.979 [2024-11-29 11:56:04.259753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:58.979 [2024-11-29 11:56:04.259906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70291 ] 00:11:58.979 [2024-11-29 11:56:04.400865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.238 [2024-11-29 11:56:04.538139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.238 [2024-11-29 11:56:04.664902] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:59.238 [2024-11-29 11:56:04.665017] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:59.238 [2024-11-29 11:56:04.665033] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:59.497 [2024-11-29 11:56:04.835338] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:11:59.497 11:56:04 -- common/autotest_common.sh@653 -- # es=216 00:11:59.497 11:56:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:59.497 11:56:04 -- common/autotest_common.sh@662 -- # es=88 00:11:59.497 11:56:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:11:59.497 11:56:04 -- common/autotest_common.sh@670 -- # es=1 00:11:59.497 11:56:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:59.497 11:56:04 -- dd/posix.sh@46 -- # gen_bytes 512 00:11:59.497 11:56:04 -- dd/common.sh@98 -- # xtrace_disable 00:11:59.497 11:56:04 -- common/autotest_common.sh@10 -- # set +x 00:11:59.497 11:56:04 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:59.756 [2024-11-29 11:56:05.024181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:59.756 [2024-11-29 11:56:05.024302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70304 ] 00:11:59.756 [2024-11-29 11:56:05.157412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.014 [2024-11-29 11:56:05.293178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.014  [2024-11-29T11:56:05.785Z] Copying: 512/512 [B] (average 500 kBps) 00:12:00.274 00:12:00.274 11:56:05 -- dd/posix.sh@49 -- # [[ cb0laimkj0wtna50h6197vhi3asiyx407manwzo85m6o2lfq9y7z8hpce4sm7xgz8jg3255suxz8qat6cx8yqr3va6nkfjbafq3bdie9iux5yxtohapo37bt9csh496roxop3596ha18vk1oqingp5hyrauy03guyr9yho30qf6gfihak10djyqqj8q96he9vikfnw3vv4cpdwoxn9gj890p3w1djshgouo1skxkx0xf5ny6s9lfelfscnuwn3we34cuvwvn1zmgyuhebwh5lcni7seqfw0xtpg5ebboqkg2ezdm8zrpjx5gsrifeuua0m59ctzrhnuxps6gm7app1hic2zrrbcdis9u3dw2a6noupet1hhlky30yzcupta63vxlwojwnwfuu0hrts0nvf6y6823310un8hi22vtmdgv153dbg6zkn2fstlqtnimu3760bztw4f6akxhh294orobz7vd2irb7mdv5mxdrklz78ckf70htse4jfhz1qwi == \c\b\0\l\a\i\m\k\j\0\w\t\n\a\5\0\h\6\1\9\7\v\h\i\3\a\s\i\y\x\4\0\7\m\a\n\w\z\o\8\5\m\6\o\2\l\f\q\9\y\7\z\8\h\p\c\e\4\s\m\7\x\g\z\8\j\g\3\2\5\5\s\u\x\z\8\q\a\t\6\c\x\8\y\q\r\3\v\a\6\n\k\f\j\b\a\f\q\3\b\d\i\e\9\i\u\x\5\y\x\t\o\h\a\p\o\3\7\b\t\9\c\s\h\4\9\6\r\o\x\o\p\3\5\9\6\h\a\1\8\v\k\1\o\q\i\n\g\p\5\h\y\r\a\u\y\0\3\g\u\y\r\9\y\h\o\3\0\q\f\6\g\f\i\h\a\k\1\0\d\j\y\q\q\j\8\q\9\6\h\e\9\v\i\k\f\n\w\3\v\v\4\c\p\d\w\o\x\n\9\g\j\8\9\0\p\3\w\1\d\j\s\h\g\o\u\o\1\s\k\x\k\x\0\x\f\5\n\y\6\s\9\l\f\e\l\f\s\c\n\u\w\n\3\w\e\3\4\c\u\v\w\v\n\1\z\m\g\y\u\h\e\b\w\h\5\l\c\n\i\7\s\e\q\f\w\0\x\t\p\g\5\e\b\b\o\q\k\g\2\e\z\d\m\8\z\r\p\j\x\5\g\s\r\i\f\e\u\u\a\0\m\5\9\c\t\z\r\h\n\u\x\p\s\6\g\m\7\a\p\p\1\h\i\c\2\z\r\r\b\c\d\i\s\9\u\3\d\w\2\a\6\n\o\u\p\e\t\1\h\h\l\k\y\3\0\y\z\c\u\p\t\a\6\3\v\x\l\w\o\j\w\n\w\f\u\u\0\h\r\t\s\0\n\v\f\6\y\6\8\2\3\3\1\0\u\n\8\h\i\2\2\v\t\m\d\g\v\1\5\3\d\b\g\6\z\k\n\2\f\s\t\l\q\t\n\i\m\u\3\7\6\0\b\z\t\w\4\f\6\a\k\x\h\h\2\9\4\o\r\o\b\z\7\v\d\2\i\r\b\7\m\d\v\5\m\x\d\r\k\l\z\7\8\c\k\f\7\0\h\t\s\e\4\j\f\h\z\1\q\w\i ]] 00:12:00.274 00:12:00.274 real 0m2.357s 00:12:00.274 user 0m1.397s 00:12:00.274 sys 0m0.621s 00:12:00.274 11:56:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:00.274 ************************************ 00:12:00.274 END TEST dd_flag_nofollow 00:12:00.274 ************************************ 00:12:00.274 11:56:05 -- common/autotest_common.sh@10 -- # set +x 00:12:00.534 11:56:05 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:12:00.534 11:56:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:00.534 11:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.534 11:56:05 -- common/autotest_common.sh@10 -- # set +x 00:12:00.534 ************************************ 00:12:00.534 START TEST dd_flag_noatime 00:12:00.534 ************************************ 00:12:00.534 11:56:05 -- common/autotest_common.sh@1114 -- # noatime 00:12:00.534 11:56:05 -- dd/posix.sh@53 -- # local atime_if 00:12:00.534 11:56:05 -- dd/posix.sh@54 -- # local atime_of 00:12:00.534 11:56:05 -- dd/posix.sh@58 -- # gen_bytes 512 00:12:00.534 11:56:05 -- dd/common.sh@98 -- # xtrace_disable 00:12:00.534 11:56:05 -- common/autotest_common.sh@10 -- # set +x 00:12:00.534 11:56:05 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:00.534 11:56:05 -- dd/posix.sh@60 -- # atime_if=1732881365 
00:12:00.534 11:56:05 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:00.534 11:56:05 -- dd/posix.sh@61 -- # atime_of=1732881365 00:12:00.534 11:56:05 -- dd/posix.sh@66 -- # sleep 1 00:12:01.474 11:56:06 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:01.474 [2024-11-29 11:56:06.876597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:01.474 [2024-11-29 11:56:06.876729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70339 ] 00:12:01.733 [2024-11-29 11:56:07.012535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.733 [2024-11-29 11:56:07.138008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.991  [2024-11-29T11:56:07.761Z] Copying: 512/512 [B] (average 500 kBps) 00:12:02.250 00:12:02.250 11:56:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:02.250 11:56:07 -- dd/posix.sh@69 -- # (( atime_if == 1732881365 )) 00:12:02.250 11:56:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:02.250 11:56:07 -- dd/posix.sh@70 -- # (( atime_of == 1732881365 )) 00:12:02.250 11:56:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:02.250 [2024-11-29 11:56:07.621422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:02.250 [2024-11-29 11:56:07.621625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70356 ] 00:12:02.250 [2024-11-29 11:56:07.757498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.508 [2024-11-29 11:56:07.889876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.508  [2024-11-29T11:56:08.587Z] Copying: 512/512 [B] (average 500 kBps) 00:12:03.076 00:12:03.076 11:56:08 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:03.076 11:56:08 -- dd/posix.sh@73 -- # (( atime_if < 1732881368 )) 00:12:03.076 00:12:03.076 real 0m2.549s 00:12:03.076 user 0m0.909s 00:12:03.076 sys 0m0.397s 00:12:03.076 11:56:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:03.076 11:56:08 -- common/autotest_common.sh@10 -- # set +x 00:12:03.076 ************************************ 00:12:03.076 END TEST dd_flag_noatime 00:12:03.076 ************************************ 00:12:03.076 11:56:08 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:12:03.076 11:56:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:03.076 11:56:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.076 11:56:08 -- common/autotest_common.sh@10 -- # set +x 00:12:03.076 ************************************ 00:12:03.076 START TEST dd_flags_misc 00:12:03.076 ************************************ 00:12:03.076 11:56:08 -- common/autotest_common.sh@1114 -- # io 00:12:03.076 11:56:08 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:12:03.076 11:56:08 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:12:03.076 11:56:08 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:12:03.076 11:56:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:03.076 11:56:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:12:03.076 11:56:08 -- dd/common.sh@98 -- # xtrace_disable 00:12:03.076 11:56:08 -- common/autotest_common.sh@10 -- # set +x 00:12:03.076 11:56:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:03.076 11:56:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:03.076 [2024-11-29 11:56:08.461709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:03.076 [2024-11-29 11:56:08.461827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70388 ] 00:12:03.334 [2024-11-29 11:56:08.601223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.334 [2024-11-29 11:56:08.728120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.593  [2024-11-29T11:56:09.377Z] Copying: 512/512 [B] (average 500 kBps) 00:12:03.866 00:12:03.866 11:56:09 -- dd/posix.sh@93 -- # [[ pff52lvcs0oezmmxew3mf2yp2m5qltnjadmdl71w168mi66i3e9cgd7f33sobnqvheu3ilc12f4bajbkxnx6a97svlj3ikk98z5k8hsgi3l5xpjn8z3i4flx6p0nuxxdsip8601fjtd49sn3i1f5kals69lygxjg61yfr39v48vonz9dt317yo6yvols75rgci7mwlrh4py0xke0nkt4o1vfy7pb7zybzfpuyjnum8eq2bgbiurdjjd9fulmzauua4qp1o88e1zasbilx6a1dg84skj4eb2r18u8semwhjz19guco30i8jew1nx0i4gjgyg2xn6j9f27a8b9or9phd4gdyrnsuojtocgnh054gczorltsbrjmvzib3w3av9pvbzn6e1g0mnnz6s101lmsf2cp70ktjuz8pe56vaypk6tcv8fvlhgbyq7glg71jt2ba1o0j6610fvj2yuujnmx5d4sskms3idbl1hwz1anzp3l7wsq51du75r42ajpts1 == \p\f\f\5\2\l\v\c\s\0\o\e\z\m\m\x\e\w\3\m\f\2\y\p\2\m\5\q\l\t\n\j\a\d\m\d\l\7\1\w\1\6\8\m\i\6\6\i\3\e\9\c\g\d\7\f\3\3\s\o\b\n\q\v\h\e\u\3\i\l\c\1\2\f\4\b\a\j\b\k\x\n\x\6\a\9\7\s\v\l\j\3\i\k\k\9\8\z\5\k\8\h\s\g\i\3\l\5\x\p\j\n\8\z\3\i\4\f\l\x\6\p\0\n\u\x\x\d\s\i\p\8\6\0\1\f\j\t\d\4\9\s\n\3\i\1\f\5\k\a\l\s\6\9\l\y\g\x\j\g\6\1\y\f\r\3\9\v\4\8\v\o\n\z\9\d\t\3\1\7\y\o\6\y\v\o\l\s\7\5\r\g\c\i\7\m\w\l\r\h\4\p\y\0\x\k\e\0\n\k\t\4\o\1\v\f\y\7\p\b\7\z\y\b\z\f\p\u\y\j\n\u\m\8\e\q\2\b\g\b\i\u\r\d\j\j\d\9\f\u\l\m\z\a\u\u\a\4\q\p\1\o\8\8\e\1\z\a\s\b\i\l\x\6\a\1\d\g\8\4\s\k\j\4\e\b\2\r\1\8\u\8\s\e\m\w\h\j\z\1\9\g\u\c\o\3\0\i\8\j\e\w\1\n\x\0\i\4\g\j\g\y\g\2\x\n\6\j\9\f\2\7\a\8\b\9\o\r\9\p\h\d\4\g\d\y\r\n\s\u\o\j\t\o\c\g\n\h\0\5\4\g\c\z\o\r\l\t\s\b\r\j\m\v\z\i\b\3\w\3\a\v\9\p\v\b\z\n\6\e\1\g\0\m\n\n\z\6\s\1\0\1\l\m\s\f\2\c\p\7\0\k\t\j\u\z\8\p\e\5\6\v\a\y\p\k\6\t\c\v\8\f\v\l\h\g\b\y\q\7\g\l\g\7\1\j\t\2\b\a\1\o\0\j\6\6\1\0\f\v\j\2\y\u\u\j\n\m\x\5\d\4\s\s\k\m\s\3\i\d\b\l\1\h\w\z\1\a\n\z\p\3\l\7\w\s\q\5\1\d\u\7\5\r\4\2\a\j\p\t\s\1 ]] 00:12:03.866 11:56:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:03.866 11:56:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:03.866 [2024-11-29 11:56:09.220710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:03.866 [2024-11-29 11:56:09.220839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70400 ] 00:12:03.866 [2024-11-29 11:56:09.360597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.150 [2024-11-29 11:56:09.491642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.150  [2024-11-29T11:56:10.228Z] Copying: 512/512 [B] (average 500 kBps) 00:12:04.717 00:12:04.717 11:56:09 -- dd/posix.sh@93 -- # [[ pff52lvcs0oezmmxew3mf2yp2m5qltnjadmdl71w168mi66i3e9cgd7f33sobnqvheu3ilc12f4bajbkxnx6a97svlj3ikk98z5k8hsgi3l5xpjn8z3i4flx6p0nuxxdsip8601fjtd49sn3i1f5kals69lygxjg61yfr39v48vonz9dt317yo6yvols75rgci7mwlrh4py0xke0nkt4o1vfy7pb7zybzfpuyjnum8eq2bgbiurdjjd9fulmzauua4qp1o88e1zasbilx6a1dg84skj4eb2r18u8semwhjz19guco30i8jew1nx0i4gjgyg2xn6j9f27a8b9or9phd4gdyrnsuojtocgnh054gczorltsbrjmvzib3w3av9pvbzn6e1g0mnnz6s101lmsf2cp70ktjuz8pe56vaypk6tcv8fvlhgbyq7glg71jt2ba1o0j6610fvj2yuujnmx5d4sskms3idbl1hwz1anzp3l7wsq51du75r42ajpts1 == \p\f\f\5\2\l\v\c\s\0\o\e\z\m\m\x\e\w\3\m\f\2\y\p\2\m\5\q\l\t\n\j\a\d\m\d\l\7\1\w\1\6\8\m\i\6\6\i\3\e\9\c\g\d\7\f\3\3\s\o\b\n\q\v\h\e\u\3\i\l\c\1\2\f\4\b\a\j\b\k\x\n\x\6\a\9\7\s\v\l\j\3\i\k\k\9\8\z\5\k\8\h\s\g\i\3\l\5\x\p\j\n\8\z\3\i\4\f\l\x\6\p\0\n\u\x\x\d\s\i\p\8\6\0\1\f\j\t\d\4\9\s\n\3\i\1\f\5\k\a\l\s\6\9\l\y\g\x\j\g\6\1\y\f\r\3\9\v\4\8\v\o\n\z\9\d\t\3\1\7\y\o\6\y\v\o\l\s\7\5\r\g\c\i\7\m\w\l\r\h\4\p\y\0\x\k\e\0\n\k\t\4\o\1\v\f\y\7\p\b\7\z\y\b\z\f\p\u\y\j\n\u\m\8\e\q\2\b\g\b\i\u\r\d\j\j\d\9\f\u\l\m\z\a\u\u\a\4\q\p\1\o\8\8\e\1\z\a\s\b\i\l\x\6\a\1\d\g\8\4\s\k\j\4\e\b\2\r\1\8\u\8\s\e\m\w\h\j\z\1\9\g\u\c\o\3\0\i\8\j\e\w\1\n\x\0\i\4\g\j\g\y\g\2\x\n\6\j\9\f\2\7\a\8\b\9\o\r\9\p\h\d\4\g\d\y\r\n\s\u\o\j\t\o\c\g\n\h\0\5\4\g\c\z\o\r\l\t\s\b\r\j\m\v\z\i\b\3\w\3\a\v\9\p\v\b\z\n\6\e\1\g\0\m\n\n\z\6\s\1\0\1\l\m\s\f\2\c\p\7\0\k\t\j\u\z\8\p\e\5\6\v\a\y\p\k\6\t\c\v\8\f\v\l\h\g\b\y\q\7\g\l\g\7\1\j\t\2\b\a\1\o\0\j\6\6\1\0\f\v\j\2\y\u\u\j\n\m\x\5\d\4\s\s\k\m\s\3\i\d\b\l\1\h\w\z\1\a\n\z\p\3\l\7\w\s\q\5\1\d\u\7\5\r\4\2\a\j\p\t\s\1 ]] 00:12:04.717 11:56:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:04.717 11:56:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:04.717 [2024-11-29 11:56:10.007500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:04.717 [2024-11-29 11:56:10.007609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:12:04.717 [2024-11-29 11:56:10.143862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.976 [2024-11-29 11:56:10.275360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.976  [2024-11-29T11:56:10.746Z] Copying: 512/512 [B] (average 166 kBps) 00:12:05.235 00:12:05.235 11:56:10 -- dd/posix.sh@93 -- # [[ pff52lvcs0oezmmxew3mf2yp2m5qltnjadmdl71w168mi66i3e9cgd7f33sobnqvheu3ilc12f4bajbkxnx6a97svlj3ikk98z5k8hsgi3l5xpjn8z3i4flx6p0nuxxdsip8601fjtd49sn3i1f5kals69lygxjg61yfr39v48vonz9dt317yo6yvols75rgci7mwlrh4py0xke0nkt4o1vfy7pb7zybzfpuyjnum8eq2bgbiurdjjd9fulmzauua4qp1o88e1zasbilx6a1dg84skj4eb2r18u8semwhjz19guco30i8jew1nx0i4gjgyg2xn6j9f27a8b9or9phd4gdyrnsuojtocgnh054gczorltsbrjmvzib3w3av9pvbzn6e1g0mnnz6s101lmsf2cp70ktjuz8pe56vaypk6tcv8fvlhgbyq7glg71jt2ba1o0j6610fvj2yuujnmx5d4sskms3idbl1hwz1anzp3l7wsq51du75r42ajpts1 == \p\f\f\5\2\l\v\c\s\0\o\e\z\m\m\x\e\w\3\m\f\2\y\p\2\m\5\q\l\t\n\j\a\d\m\d\l\7\1\w\1\6\8\m\i\6\6\i\3\e\9\c\g\d\7\f\3\3\s\o\b\n\q\v\h\e\u\3\i\l\c\1\2\f\4\b\a\j\b\k\x\n\x\6\a\9\7\s\v\l\j\3\i\k\k\9\8\z\5\k\8\h\s\g\i\3\l\5\x\p\j\n\8\z\3\i\4\f\l\x\6\p\0\n\u\x\x\d\s\i\p\8\6\0\1\f\j\t\d\4\9\s\n\3\i\1\f\5\k\a\l\s\6\9\l\y\g\x\j\g\6\1\y\f\r\3\9\v\4\8\v\o\n\z\9\d\t\3\1\7\y\o\6\y\v\o\l\s\7\5\r\g\c\i\7\m\w\l\r\h\4\p\y\0\x\k\e\0\n\k\t\4\o\1\v\f\y\7\p\b\7\z\y\b\z\f\p\u\y\j\n\u\m\8\e\q\2\b\g\b\i\u\r\d\j\j\d\9\f\u\l\m\z\a\u\u\a\4\q\p\1\o\8\8\e\1\z\a\s\b\i\l\x\6\a\1\d\g\8\4\s\k\j\4\e\b\2\r\1\8\u\8\s\e\m\w\h\j\z\1\9\g\u\c\o\3\0\i\8\j\e\w\1\n\x\0\i\4\g\j\g\y\g\2\x\n\6\j\9\f\2\7\a\8\b\9\o\r\9\p\h\d\4\g\d\y\r\n\s\u\o\j\t\o\c\g\n\h\0\5\4\g\c\z\o\r\l\t\s\b\r\j\m\v\z\i\b\3\w\3\a\v\9\p\v\b\z\n\6\e\1\g\0\m\n\n\z\6\s\1\0\1\l\m\s\f\2\c\p\7\0\k\t\j\u\z\8\p\e\5\6\v\a\y\p\k\6\t\c\v\8\f\v\l\h\g\b\y\q\7\g\l\g\7\1\j\t\2\b\a\1\o\0\j\6\6\1\0\f\v\j\2\y\u\u\j\n\m\x\5\d\4\s\s\k\m\s\3\i\d\b\l\1\h\w\z\1\a\n\z\p\3\l\7\w\s\q\5\1\d\u\7\5\r\4\2\a\j\p\t\s\1 ]] 00:12:05.235 11:56:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:05.235 11:56:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:05.494 [2024-11-29 11:56:10.762000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:05.494 [2024-11-29 11:56:10.762129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70416 ] 00:12:05.494 [2024-11-29 11:56:10.895425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.753 [2024-11-29 11:56:11.018223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.753  [2024-11-29T11:56:11.523Z] Copying: 512/512 [B] (average 250 kBps) 00:12:06.012 00:12:06.012 11:56:11 -- dd/posix.sh@93 -- # [[ pff52lvcs0oezmmxew3mf2yp2m5qltnjadmdl71w168mi66i3e9cgd7f33sobnqvheu3ilc12f4bajbkxnx6a97svlj3ikk98z5k8hsgi3l5xpjn8z3i4flx6p0nuxxdsip8601fjtd49sn3i1f5kals69lygxjg61yfr39v48vonz9dt317yo6yvols75rgci7mwlrh4py0xke0nkt4o1vfy7pb7zybzfpuyjnum8eq2bgbiurdjjd9fulmzauua4qp1o88e1zasbilx6a1dg84skj4eb2r18u8semwhjz19guco30i8jew1nx0i4gjgyg2xn6j9f27a8b9or9phd4gdyrnsuojtocgnh054gczorltsbrjmvzib3w3av9pvbzn6e1g0mnnz6s101lmsf2cp70ktjuz8pe56vaypk6tcv8fvlhgbyq7glg71jt2ba1o0j6610fvj2yuujnmx5d4sskms3idbl1hwz1anzp3l7wsq51du75r42ajpts1 == \p\f\f\5\2\l\v\c\s\0\o\e\z\m\m\x\e\w\3\m\f\2\y\p\2\m\5\q\l\t\n\j\a\d\m\d\l\7\1\w\1\6\8\m\i\6\6\i\3\e\9\c\g\d\7\f\3\3\s\o\b\n\q\v\h\e\u\3\i\l\c\1\2\f\4\b\a\j\b\k\x\n\x\6\a\9\7\s\v\l\j\3\i\k\k\9\8\z\5\k\8\h\s\g\i\3\l\5\x\p\j\n\8\z\3\i\4\f\l\x\6\p\0\n\u\x\x\d\s\i\p\8\6\0\1\f\j\t\d\4\9\s\n\3\i\1\f\5\k\a\l\s\6\9\l\y\g\x\j\g\6\1\y\f\r\3\9\v\4\8\v\o\n\z\9\d\t\3\1\7\y\o\6\y\v\o\l\s\7\5\r\g\c\i\7\m\w\l\r\h\4\p\y\0\x\k\e\0\n\k\t\4\o\1\v\f\y\7\p\b\7\z\y\b\z\f\p\u\y\j\n\u\m\8\e\q\2\b\g\b\i\u\r\d\j\j\d\9\f\u\l\m\z\a\u\u\a\4\q\p\1\o\8\8\e\1\z\a\s\b\i\l\x\6\a\1\d\g\8\4\s\k\j\4\e\b\2\r\1\8\u\8\s\e\m\w\h\j\z\1\9\g\u\c\o\3\0\i\8\j\e\w\1\n\x\0\i\4\g\j\g\y\g\2\x\n\6\j\9\f\2\7\a\8\b\9\o\r\9\p\h\d\4\g\d\y\r\n\s\u\o\j\t\o\c\g\n\h\0\5\4\g\c\z\o\r\l\t\s\b\r\j\m\v\z\i\b\3\w\3\a\v\9\p\v\b\z\n\6\e\1\g\0\m\n\n\z\6\s\1\0\1\l\m\s\f\2\c\p\7\0\k\t\j\u\z\8\p\e\5\6\v\a\y\p\k\6\t\c\v\8\f\v\l\h\g\b\y\q\7\g\l\g\7\1\j\t\2\b\a\1\o\0\j\6\6\1\0\f\v\j\2\y\u\u\j\n\m\x\5\d\4\s\s\k\m\s\3\i\d\b\l\1\h\w\z\1\a\n\z\p\3\l\7\w\s\q\5\1\d\u\7\5\r\4\2\a\j\p\t\s\1 ]] 00:12:06.012 11:56:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:06.012 11:56:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:12:06.012 11:56:11 -- dd/common.sh@98 -- # xtrace_disable 00:12:06.012 11:56:11 -- common/autotest_common.sh@10 -- # set +x 00:12:06.012 11:56:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:06.012 11:56:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:06.271 [2024-11-29 11:56:11.544921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:06.271 [2024-11-29 11:56:11.545106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70429 ] 00:12:06.271 [2024-11-29 11:56:11.692841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.530 [2024-11-29 11:56:11.814120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.530  [2024-11-29T11:56:12.299Z] Copying: 512/512 [B] (average 500 kBps) 00:12:06.788 00:12:06.789 11:56:12 -- dd/posix.sh@93 -- # [[ mhswj0kd07xjd53v7bppmu3t5aaky2uweuvatg2rz8vqxzw88r1kl3o0zduk8ha36t05v1tfrq4eegz9eyxdm1ltjhw9itt7nnd4ye8ydioxv71q7914b5q917b7oxyvej2xzk9fsmvom1h1d7nsadhmoma6sbznquunzdy3vg7dtqcb5dla3lo7jd41qxydguu3s4mw4zs5xofmuh43xdca4sc7qrtru0fklbrhivdm0vm59xfpk3k9fsjkhs5w2wace31xlz54xerpa3p8a8zf456a10xncgfq7o27zbeaujmcw46r6uu9qhhr7dkwl8zlwq60mu2wsdh3l53yo0nfnnvag47mvqywyingo1u16r22lsrktpjr2yebm8u1qpjt4t4ra1v2731alhbvv6e9ay9vuodd3g6c34zolyrrnga7004r5u75jkp5s33mj06nrl84hyndg7nqveuzcgfm5lx0r85fy3jcsokwqvnejtsz2p3n2x6vija6cw2d == \m\h\s\w\j\0\k\d\0\7\x\j\d\5\3\v\7\b\p\p\m\u\3\t\5\a\a\k\y\2\u\w\e\u\v\a\t\g\2\r\z\8\v\q\x\z\w\8\8\r\1\k\l\3\o\0\z\d\u\k\8\h\a\3\6\t\0\5\v\1\t\f\r\q\4\e\e\g\z\9\e\y\x\d\m\1\l\t\j\h\w\9\i\t\t\7\n\n\d\4\y\e\8\y\d\i\o\x\v\7\1\q\7\9\1\4\b\5\q\9\1\7\b\7\o\x\y\v\e\j\2\x\z\k\9\f\s\m\v\o\m\1\h\1\d\7\n\s\a\d\h\m\o\m\a\6\s\b\z\n\q\u\u\n\z\d\y\3\v\g\7\d\t\q\c\b\5\d\l\a\3\l\o\7\j\d\4\1\q\x\y\d\g\u\u\3\s\4\m\w\4\z\s\5\x\o\f\m\u\h\4\3\x\d\c\a\4\s\c\7\q\r\t\r\u\0\f\k\l\b\r\h\i\v\d\m\0\v\m\5\9\x\f\p\k\3\k\9\f\s\j\k\h\s\5\w\2\w\a\c\e\3\1\x\l\z\5\4\x\e\r\p\a\3\p\8\a\8\z\f\4\5\6\a\1\0\x\n\c\g\f\q\7\o\2\7\z\b\e\a\u\j\m\c\w\4\6\r\6\u\u\9\q\h\h\r\7\d\k\w\l\8\z\l\w\q\6\0\m\u\2\w\s\d\h\3\l\5\3\y\o\0\n\f\n\n\v\a\g\4\7\m\v\q\y\w\y\i\n\g\o\1\u\1\6\r\2\2\l\s\r\k\t\p\j\r\2\y\e\b\m\8\u\1\q\p\j\t\4\t\4\r\a\1\v\2\7\3\1\a\l\h\b\v\v\6\e\9\a\y\9\v\u\o\d\d\3\g\6\c\3\4\z\o\l\y\r\r\n\g\a\7\0\0\4\r\5\u\7\5\j\k\p\5\s\3\3\m\j\0\6\n\r\l\8\4\h\y\n\d\g\7\n\q\v\e\u\z\c\g\f\m\5\l\x\0\r\8\5\f\y\3\j\c\s\o\k\w\q\v\n\e\j\t\s\z\2\p\3\n\2\x\6\v\i\j\a\6\c\w\2\d ]] 00:12:06.789 11:56:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:06.789 11:56:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:06.789 [2024-11-29 11:56:12.295331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:06.789 [2024-11-29 11:56:12.295441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70437 ] 00:12:07.047 [2024-11-29 11:56:12.429143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.047 [2024-11-29 11:56:12.551388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.306  [2024-11-29T11:56:13.075Z] Copying: 512/512 [B] (average 500 kBps) 00:12:07.564 00:12:07.565 11:56:12 -- dd/posix.sh@93 -- # [[ mhswj0kd07xjd53v7bppmu3t5aaky2uweuvatg2rz8vqxzw88r1kl3o0zduk8ha36t05v1tfrq4eegz9eyxdm1ltjhw9itt7nnd4ye8ydioxv71q7914b5q917b7oxyvej2xzk9fsmvom1h1d7nsadhmoma6sbznquunzdy3vg7dtqcb5dla3lo7jd41qxydguu3s4mw4zs5xofmuh43xdca4sc7qrtru0fklbrhivdm0vm59xfpk3k9fsjkhs5w2wace31xlz54xerpa3p8a8zf456a10xncgfq7o27zbeaujmcw46r6uu9qhhr7dkwl8zlwq60mu2wsdh3l53yo0nfnnvag47mvqywyingo1u16r22lsrktpjr2yebm8u1qpjt4t4ra1v2731alhbvv6e9ay9vuodd3g6c34zolyrrnga7004r5u75jkp5s33mj06nrl84hyndg7nqveuzcgfm5lx0r85fy3jcsokwqvnejtsz2p3n2x6vija6cw2d == \m\h\s\w\j\0\k\d\0\7\x\j\d\5\3\v\7\b\p\p\m\u\3\t\5\a\a\k\y\2\u\w\e\u\v\a\t\g\2\r\z\8\v\q\x\z\w\8\8\r\1\k\l\3\o\0\z\d\u\k\8\h\a\3\6\t\0\5\v\1\t\f\r\q\4\e\e\g\z\9\e\y\x\d\m\1\l\t\j\h\w\9\i\t\t\7\n\n\d\4\y\e\8\y\d\i\o\x\v\7\1\q\7\9\1\4\b\5\q\9\1\7\b\7\o\x\y\v\e\j\2\x\z\k\9\f\s\m\v\o\m\1\h\1\d\7\n\s\a\d\h\m\o\m\a\6\s\b\z\n\q\u\u\n\z\d\y\3\v\g\7\d\t\q\c\b\5\d\l\a\3\l\o\7\j\d\4\1\q\x\y\d\g\u\u\3\s\4\m\w\4\z\s\5\x\o\f\m\u\h\4\3\x\d\c\a\4\s\c\7\q\r\t\r\u\0\f\k\l\b\r\h\i\v\d\m\0\v\m\5\9\x\f\p\k\3\k\9\f\s\j\k\h\s\5\w\2\w\a\c\e\3\1\x\l\z\5\4\x\e\r\p\a\3\p\8\a\8\z\f\4\5\6\a\1\0\x\n\c\g\f\q\7\o\2\7\z\b\e\a\u\j\m\c\w\4\6\r\6\u\u\9\q\h\h\r\7\d\k\w\l\8\z\l\w\q\6\0\m\u\2\w\s\d\h\3\l\5\3\y\o\0\n\f\n\n\v\a\g\4\7\m\v\q\y\w\y\i\n\g\o\1\u\1\6\r\2\2\l\s\r\k\t\p\j\r\2\y\e\b\m\8\u\1\q\p\j\t\4\t\4\r\a\1\v\2\7\3\1\a\l\h\b\v\v\6\e\9\a\y\9\v\u\o\d\d\3\g\6\c\3\4\z\o\l\y\r\r\n\g\a\7\0\0\4\r\5\u\7\5\j\k\p\5\s\3\3\m\j\0\6\n\r\l\8\4\h\y\n\d\g\7\n\q\v\e\u\z\c\g\f\m\5\l\x\0\r\8\5\f\y\3\j\c\s\o\k\w\q\v\n\e\j\t\s\z\2\p\3\n\2\x\6\v\i\j\a\6\c\w\2\d ]] 00:12:07.565 11:56:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:07.565 11:56:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:07.565 [2024-11-29 11:56:13.037576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:07.565 [2024-11-29 11:56:13.037707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70444 ] 00:12:07.824 [2024-11-29 11:56:13.176278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.824 [2024-11-29 11:56:13.300697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.082  [2024-11-29T11:56:13.853Z] Copying: 512/512 [B] (average 166 kBps) 00:12:08.342 00:12:08.342 11:56:13 -- dd/posix.sh@93 -- # [[ mhswj0kd07xjd53v7bppmu3t5aaky2uweuvatg2rz8vqxzw88r1kl3o0zduk8ha36t05v1tfrq4eegz9eyxdm1ltjhw9itt7nnd4ye8ydioxv71q7914b5q917b7oxyvej2xzk9fsmvom1h1d7nsadhmoma6sbznquunzdy3vg7dtqcb5dla3lo7jd41qxydguu3s4mw4zs5xofmuh43xdca4sc7qrtru0fklbrhivdm0vm59xfpk3k9fsjkhs5w2wace31xlz54xerpa3p8a8zf456a10xncgfq7o27zbeaujmcw46r6uu9qhhr7dkwl8zlwq60mu2wsdh3l53yo0nfnnvag47mvqywyingo1u16r22lsrktpjr2yebm8u1qpjt4t4ra1v2731alhbvv6e9ay9vuodd3g6c34zolyrrnga7004r5u75jkp5s33mj06nrl84hyndg7nqveuzcgfm5lx0r85fy3jcsokwqvnejtsz2p3n2x6vija6cw2d == \m\h\s\w\j\0\k\d\0\7\x\j\d\5\3\v\7\b\p\p\m\u\3\t\5\a\a\k\y\2\u\w\e\u\v\a\t\g\2\r\z\8\v\q\x\z\w\8\8\r\1\k\l\3\o\0\z\d\u\k\8\h\a\3\6\t\0\5\v\1\t\f\r\q\4\e\e\g\z\9\e\y\x\d\m\1\l\t\j\h\w\9\i\t\t\7\n\n\d\4\y\e\8\y\d\i\o\x\v\7\1\q\7\9\1\4\b\5\q\9\1\7\b\7\o\x\y\v\e\j\2\x\z\k\9\f\s\m\v\o\m\1\h\1\d\7\n\s\a\d\h\m\o\m\a\6\s\b\z\n\q\u\u\n\z\d\y\3\v\g\7\d\t\q\c\b\5\d\l\a\3\l\o\7\j\d\4\1\q\x\y\d\g\u\u\3\s\4\m\w\4\z\s\5\x\o\f\m\u\h\4\3\x\d\c\a\4\s\c\7\q\r\t\r\u\0\f\k\l\b\r\h\i\v\d\m\0\v\m\5\9\x\f\p\k\3\k\9\f\s\j\k\h\s\5\w\2\w\a\c\e\3\1\x\l\z\5\4\x\e\r\p\a\3\p\8\a\8\z\f\4\5\6\a\1\0\x\n\c\g\f\q\7\o\2\7\z\b\e\a\u\j\m\c\w\4\6\r\6\u\u\9\q\h\h\r\7\d\k\w\l\8\z\l\w\q\6\0\m\u\2\w\s\d\h\3\l\5\3\y\o\0\n\f\n\n\v\a\g\4\7\m\v\q\y\w\y\i\n\g\o\1\u\1\6\r\2\2\l\s\r\k\t\p\j\r\2\y\e\b\m\8\u\1\q\p\j\t\4\t\4\r\a\1\v\2\7\3\1\a\l\h\b\v\v\6\e\9\a\y\9\v\u\o\d\d\3\g\6\c\3\4\z\o\l\y\r\r\n\g\a\7\0\0\4\r\5\u\7\5\j\k\p\5\s\3\3\m\j\0\6\n\r\l\8\4\h\y\n\d\g\7\n\q\v\e\u\z\c\g\f\m\5\l\x\0\r\8\5\f\y\3\j\c\s\o\k\w\q\v\n\e\j\t\s\z\2\p\3\n\2\x\6\v\i\j\a\6\c\w\2\d ]] 00:12:08.342 11:56:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:08.342 11:56:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:08.342 [2024-11-29 11:56:13.782366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:08.342 [2024-11-29 11:56:13.782481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70457 ] 00:12:08.601 [2024-11-29 11:56:13.914442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.601 [2024-11-29 11:56:14.040212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.860  [2024-11-29T11:56:14.630Z] Copying: 512/512 [B] (average 500 kBps) 00:12:09.119 00:12:09.120 11:56:14 -- dd/posix.sh@93 -- # [[ mhswj0kd07xjd53v7bppmu3t5aaky2uweuvatg2rz8vqxzw88r1kl3o0zduk8ha36t05v1tfrq4eegz9eyxdm1ltjhw9itt7nnd4ye8ydioxv71q7914b5q917b7oxyvej2xzk9fsmvom1h1d7nsadhmoma6sbznquunzdy3vg7dtqcb5dla3lo7jd41qxydguu3s4mw4zs5xofmuh43xdca4sc7qrtru0fklbrhivdm0vm59xfpk3k9fsjkhs5w2wace31xlz54xerpa3p8a8zf456a10xncgfq7o27zbeaujmcw46r6uu9qhhr7dkwl8zlwq60mu2wsdh3l53yo0nfnnvag47mvqywyingo1u16r22lsrktpjr2yebm8u1qpjt4t4ra1v2731alhbvv6e9ay9vuodd3g6c34zolyrrnga7004r5u75jkp5s33mj06nrl84hyndg7nqveuzcgfm5lx0r85fy3jcsokwqvnejtsz2p3n2x6vija6cw2d == \m\h\s\w\j\0\k\d\0\7\x\j\d\5\3\v\7\b\p\p\m\u\3\t\5\a\a\k\y\2\u\w\e\u\v\a\t\g\2\r\z\8\v\q\x\z\w\8\8\r\1\k\l\3\o\0\z\d\u\k\8\h\a\3\6\t\0\5\v\1\t\f\r\q\4\e\e\g\z\9\e\y\x\d\m\1\l\t\j\h\w\9\i\t\t\7\n\n\d\4\y\e\8\y\d\i\o\x\v\7\1\q\7\9\1\4\b\5\q\9\1\7\b\7\o\x\y\v\e\j\2\x\z\k\9\f\s\m\v\o\m\1\h\1\d\7\n\s\a\d\h\m\o\m\a\6\s\b\z\n\q\u\u\n\z\d\y\3\v\g\7\d\t\q\c\b\5\d\l\a\3\l\o\7\j\d\4\1\q\x\y\d\g\u\u\3\s\4\m\w\4\z\s\5\x\o\f\m\u\h\4\3\x\d\c\a\4\s\c\7\q\r\t\r\u\0\f\k\l\b\r\h\i\v\d\m\0\v\m\5\9\x\f\p\k\3\k\9\f\s\j\k\h\s\5\w\2\w\a\c\e\3\1\x\l\z\5\4\x\e\r\p\a\3\p\8\a\8\z\f\4\5\6\a\1\0\x\n\c\g\f\q\7\o\2\7\z\b\e\a\u\j\m\c\w\4\6\r\6\u\u\9\q\h\h\r\7\d\k\w\l\8\z\l\w\q\6\0\m\u\2\w\s\d\h\3\l\5\3\y\o\0\n\f\n\n\v\a\g\4\7\m\v\q\y\w\y\i\n\g\o\1\u\1\6\r\2\2\l\s\r\k\t\p\j\r\2\y\e\b\m\8\u\1\q\p\j\t\4\t\4\r\a\1\v\2\7\3\1\a\l\h\b\v\v\6\e\9\a\y\9\v\u\o\d\d\3\g\6\c\3\4\z\o\l\y\r\r\n\g\a\7\0\0\4\r\5\u\7\5\j\k\p\5\s\3\3\m\j\0\6\n\r\l\8\4\h\y\n\d\g\7\n\q\v\e\u\z\c\g\f\m\5\l\x\0\r\8\5\f\y\3\j\c\s\o\k\w\q\v\n\e\j\t\s\z\2\p\3\n\2\x\6\v\i\j\a\6\c\w\2\d ]] 00:12:09.120 00:12:09.120 real 0m6.076s 00:12:09.120 user 0m3.594s 00:12:09.120 sys 0m1.497s 00:12:09.120 11:56:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.120 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:12:09.120 ************************************ 00:12:09.120 END TEST dd_flags_misc 00:12:09.120 ************************************ 00:12:09.120 11:56:14 -- dd/posix.sh@131 -- # tests_forced_aio 00:12:09.120 11:56:14 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:12:09.120 * Second test run, disabling liburing, forcing AIO 00:12:09.120 11:56:14 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:12:09.120 11:56:14 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:12:09.120 11:56:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.120 11:56:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.120 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:12:09.120 ************************************ 00:12:09.120 START TEST dd_flag_append_forced_aio 00:12:09.120 ************************************ 00:12:09.120 11:56:14 -- common/autotest_common.sh@1114 -- # append 00:12:09.120 11:56:14 -- dd/posix.sh@16 -- # local dump0 00:12:09.120 11:56:14 -- dd/posix.sh@17 -- # local dump1 00:12:09.120 11:56:14 -- dd/posix.sh@19 -- # gen_bytes 32 
00:12:09.120 11:56:14 -- dd/common.sh@98 -- # xtrace_disable 00:12:09.120 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:12:09.120 11:56:14 -- dd/posix.sh@19 -- # dump0=4duxj8sseiyafvolbhozns6u2lp2jc96 00:12:09.120 11:56:14 -- dd/posix.sh@20 -- # gen_bytes 32 00:12:09.120 11:56:14 -- dd/common.sh@98 -- # xtrace_disable 00:12:09.120 11:56:14 -- common/autotest_common.sh@10 -- # set +x 00:12:09.120 11:56:14 -- dd/posix.sh@20 -- # dump1=l9as5whg9v662lmomscrufal93xs2now 00:12:09.120 11:56:14 -- dd/posix.sh@22 -- # printf %s 4duxj8sseiyafvolbhozns6u2lp2jc96 00:12:09.120 11:56:14 -- dd/posix.sh@23 -- # printf %s l9as5whg9v662lmomscrufal93xs2now 00:12:09.120 11:56:14 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:12:09.120 [2024-11-29 11:56:14.578782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:09.120 [2024-11-29 11:56:14.578906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70488 ] 00:12:09.379 [2024-11-29 11:56:14.712007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.379 [2024-11-29 11:56:14.832365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.638  [2024-11-29T11:56:15.408Z] Copying: 32/32 [B] (average 31 kBps) 00:12:09.897 00:12:09.897 11:56:15 -- dd/posix.sh@27 -- # [[ l9as5whg9v662lmomscrufal93xs2now4duxj8sseiyafvolbhozns6u2lp2jc96 == \l\9\a\s\5\w\h\g\9\v\6\6\2\l\m\o\m\s\c\r\u\f\a\l\9\3\x\s\2\n\o\w\4\d\u\x\j\8\s\s\e\i\y\a\f\v\o\l\b\h\o\z\n\s\6\u\2\l\p\2\j\c\9\6 ]] 00:12:09.897 00:12:09.897 real 0m0.756s 00:12:09.897 user 0m0.448s 00:12:09.897 sys 0m0.182s 00:12:09.897 11:56:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.897 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 ************************************ 00:12:09.897 END TEST dd_flag_append_forced_aio 00:12:09.897 ************************************ 00:12:09.897 11:56:15 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:12:09.897 11:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.897 11:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.897 11:56:15 -- common/autotest_common.sh@10 -- # set +x 00:12:09.897 ************************************ 00:12:09.897 START TEST dd_flag_directory_forced_aio 00:12:09.897 ************************************ 00:12:09.897 11:56:15 -- common/autotest_common.sh@1114 -- # directory 00:12:09.897 11:56:15 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:09.897 11:56:15 -- common/autotest_common.sh@650 -- # local es=0 00:12:09.897 11:56:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:09.897 11:56:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.897 11:56:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.897 11:56:15 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.897 11:56:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.897 11:56:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.897 11:56:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.897 11:56:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.897 11:56:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.897 11:56:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:09.897 [2024-11-29 11:56:15.391247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:09.897 [2024-11-29 11:56:15.391391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:12:10.156 [2024-11-29 11:56:15.529685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.156 [2024-11-29 11:56:15.650661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.416 [2024-11-29 11:56:15.765478] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:10.416 [2024-11-29 11:56:15.765563] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:10.416 [2024-11-29 11:56:15.765579] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.416 [2024-11-29 11:56:15.924145] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:12:10.676 11:56:16 -- common/autotest_common.sh@653 -- # es=236 00:12:10.676 11:56:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.676 11:56:16 -- common/autotest_common.sh@662 -- # es=108 00:12:10.676 11:56:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:10.676 11:56:16 -- common/autotest_common.sh@670 -- # es=1 00:12:10.676 11:56:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.676 11:56:16 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:10.676 11:56:16 -- common/autotest_common.sh@650 -- # local es=0 00:12:10.676 11:56:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:10.676 11:56:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.676 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.676 11:56:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.676 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.676 11:56:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.676 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.676 11:56:16 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.676 11:56:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:10.676 11:56:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:12:10.676 [2024-11-29 11:56:16.094110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:10.676 [2024-11-29 11:56:16.094257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70525 ] 00:12:10.935 [2024-11-29 11:56:16.234925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.935 [2024-11-29 11:56:16.358689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.205 [2024-11-29 11:56:16.474457] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:11.205 [2024-11-29 11:56:16.474546] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:12:11.205 [2024-11-29 11:56:16.474562] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:11.205 [2024-11-29 11:56:16.633033] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:12:11.463 11:56:16 -- common/autotest_common.sh@653 -- # es=236 00:12:11.463 11:56:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:11.463 11:56:16 -- common/autotest_common.sh@662 -- # es=108 00:12:11.463 11:56:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:11.463 11:56:16 -- common/autotest_common.sh@670 -- # es=1 00:12:11.463 11:56:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:11.463 00:12:11.463 real 0m1.423s 00:12:11.463 user 0m0.834s 00:12:11.463 sys 0m0.375s 00:12:11.463 11:56:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.463 ************************************ 00:12:11.463 END TEST dd_flag_directory_forced_aio 00:12:11.463 ************************************ 00:12:11.463 11:56:16 -- common/autotest_common.sh@10 -- # set +x 00:12:11.463 11:56:16 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:12:11.463 11:56:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:11.463 11:56:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.463 11:56:16 -- common/autotest_common.sh@10 -- # set +x 00:12:11.463 ************************************ 00:12:11.463 START TEST dd_flag_nofollow_forced_aio 00:12:11.463 ************************************ 00:12:11.463 11:56:16 -- common/autotest_common.sh@1114 -- # nofollow 00:12:11.463 11:56:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:11.463 11:56:16 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:11.463 11:56:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:11.463 11:56:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:11.464 11:56:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:11.464 11:56:16 -- common/autotest_common.sh@650 -- # local es=0 00:12:11.464 11:56:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:11.464 11:56:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.464 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.464 11:56:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.464 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.464 11:56:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.464 11:56:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.464 11:56:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.464 11:56:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:11.464 11:56:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:11.464 [2024-11-29 11:56:16.863869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:11.464 [2024-11-29 11:56:16.863991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70559 ] 00:12:11.721 [2024-11-29 11:56:16.997035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.721 [2024-11-29 11:56:17.120139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.977 [2024-11-29 11:56:17.236699] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:11.977 [2024-11-29 11:56:17.236786] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:12:11.977 [2024-11-29 11:56:17.236803] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:11.977 [2024-11-29 11:56:17.400789] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:12:12.235 11:56:17 -- common/autotest_common.sh@653 -- # es=216 00:12:12.235 11:56:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:12.235 11:56:17 -- common/autotest_common.sh@662 -- # es=88 00:12:12.235 11:56:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:12.235 11:56:17 -- common/autotest_common.sh@670 -- # es=1 00:12:12.235 11:56:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:12.235 11:56:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:12.235 11:56:17 -- common/autotest_common.sh@650 -- # local es=0 00:12:12.235 11:56:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:12.235 11:56:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.236 11:56:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.236 11:56:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.236 11:56:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.236 11:56:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.236 11:56:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.236 11:56:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.236 11:56:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:12.236 11:56:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:12:12.236 [2024-11-29 11:56:17.573628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:12.236 [2024-11-29 11:56:17.573782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70569 ] 00:12:12.236 [2024-11-29 11:56:17.717160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.495 [2024-11-29 11:56:17.847833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.495 [2024-11-29 11:56:17.968725] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:12.495 [2024-11-29 11:56:17.968831] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:12:12.495 [2024-11-29 11:56:17.968855] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.753 [2024-11-29 11:56:18.131499] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:12:12.753 11:56:18 -- common/autotest_common.sh@653 -- # es=216 00:12:12.753 11:56:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:12.753 11:56:18 -- common/autotest_common.sh@662 -- # es=88 00:12:12.753 11:56:18 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:12.753 11:56:18 -- common/autotest_common.sh@670 -- # es=1 00:12:12.753 11:56:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:12.753 11:56:18 -- dd/posix.sh@46 -- # gen_bytes 512 00:12:12.753 11:56:18 -- dd/common.sh@98 -- # xtrace_disable 00:12:12.754 11:56:18 -- common/autotest_common.sh@10 -- # set +x 00:12:12.754 11:56:18 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:13.012 [2024-11-29 11:56:18.310957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:13.012 [2024-11-29 11:56:18.311091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70582 ] 00:12:13.012 [2024-11-29 11:56:18.448662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.271 [2024-11-29 11:56:18.570196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.271  [2024-11-29T11:56:19.040Z] Copying: 512/512 [B] (average 500 kBps) 00:12:13.529 00:12:13.529 11:56:19 -- dd/posix.sh@49 -- # [[ 5r051ni8143pl33tess9ywvfsciqbx8iy0y17r3e9hjx62xhn2btr9l8k5c1hwh0sieoz8dfajk1hz9pqextzavmesd17n3q47zyup5lbf8o01o5x89zvkj6m7ncn6cc5hzrec4olr7g3zzgkgsqfj7a6jhjye2plymk371tiovy3b2x17tz4ddga7xbl6ja7833n1g9xj7nm8ts0t6shq6q633eatlavwqoa8ui409030cst4djzi44q0kwd9l8zukt7kfpkc3hgem98258hhl9ctpm8r59zitu91kxwl9gakzkcr6yvbu4eghwwmdnitll2yeznxkssp1p4bxyosnl5olrkyrfnb2snpe1ar8tvq5751bech4bklabplms6ja0hyns46ysd8vkrb2g8zdg2yi1mwuqts0lzuiz29pxhe4rge5y8d6g3v12hte4jdpysdc8cy84gcvlmpibrdiqapqwj683zg8lzdnx21qnbywvc24f56ddghgje60a == \5\r\0\5\1\n\i\8\1\4\3\p\l\3\3\t\e\s\s\9\y\w\v\f\s\c\i\q\b\x\8\i\y\0\y\1\7\r\3\e\9\h\j\x\6\2\x\h\n\2\b\t\r\9\l\8\k\5\c\1\h\w\h\0\s\i\e\o\z\8\d\f\a\j\k\1\h\z\9\p\q\e\x\t\z\a\v\m\e\s\d\1\7\n\3\q\4\7\z\y\u\p\5\l\b\f\8\o\0\1\o\5\x\8\9\z\v\k\j\6\m\7\n\c\n\6\c\c\5\h\z\r\e\c\4\o\l\r\7\g\3\z\z\g\k\g\s\q\f\j\7\a\6\j\h\j\y\e\2\p\l\y\m\k\3\7\1\t\i\o\v\y\3\b\2\x\1\7\t\z\4\d\d\g\a\7\x\b\l\6\j\a\7\8\3\3\n\1\g\9\x\j\7\n\m\8\t\s\0\t\6\s\h\q\6\q\6\3\3\e\a\t\l\a\v\w\q\o\a\8\u\i\4\0\9\0\3\0\c\s\t\4\d\j\z\i\4\4\q\0\k\w\d\9\l\8\z\u\k\t\7\k\f\p\k\c\3\h\g\e\m\9\8\2\5\8\h\h\l\9\c\t\p\m\8\r\5\9\z\i\t\u\9\1\k\x\w\l\9\g\a\k\z\k\c\r\6\y\v\b\u\4\e\g\h\w\w\m\d\n\i\t\l\l\2\y\e\z\n\x\k\s\s\p\1\p\4\b\x\y\o\s\n\l\5\o\l\r\k\y\r\f\n\b\2\s\n\p\e\1\a\r\8\t\v\q\5\7\5\1\b\e\c\h\4\b\k\l\a\b\p\l\m\s\6\j\a\0\h\y\n\s\4\6\y\s\d\8\v\k\r\b\2\g\8\z\d\g\2\y\i\1\m\w\u\q\t\s\0\l\z\u\i\z\2\9\p\x\h\e\4\r\g\e\5\y\8\d\6\g\3\v\1\2\h\t\e\4\j\d\p\y\s\d\c\8\c\y\8\4\g\c\v\l\m\p\i\b\r\d\i\q\a\p\q\w\j\6\8\3\z\g\8\l\z\d\n\x\2\1\q\n\b\y\w\v\c\2\4\f\5\6\d\d\g\h\g\j\e\6\0\a ]] 00:12:13.529 00:12:13.529 real 0m2.198s 00:12:13.529 user 0m1.288s 00:12:13.529 sys 0m0.580s 00:12:13.529 11:56:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.529 11:56:19 -- common/autotest_common.sh@10 -- # set +x 00:12:13.529 ************************************ 00:12:13.529 END TEST dd_flag_nofollow_forced_aio 00:12:13.529 ************************************ 00:12:13.788 11:56:19 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:12:13.788 11:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:13.788 11:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.788 11:56:19 -- common/autotest_common.sh@10 -- # set +x 00:12:13.788 ************************************ 00:12:13.788 START TEST dd_flag_noatime_forced_aio 00:12:13.788 ************************************ 00:12:13.788 11:56:19 -- common/autotest_common.sh@1114 -- # noatime 00:12:13.788 11:56:19 -- dd/posix.sh@53 -- # local atime_if 00:12:13.788 11:56:19 -- dd/posix.sh@54 -- # local atime_of 00:12:13.788 11:56:19 -- dd/posix.sh@58 -- # gen_bytes 512 00:12:13.788 11:56:19 -- dd/common.sh@98 -- # xtrace_disable 00:12:13.788 11:56:19 -- common/autotest_common.sh@10 -- # set +x 00:12:13.788 11:56:19 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:13.788 11:56:19 -- dd/posix.sh@60 -- 
# atime_if=1732881378 00:12:13.788 11:56:19 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:13.788 11:56:19 -- dd/posix.sh@61 -- # atime_of=1732881379 00:12:13.788 11:56:19 -- dd/posix.sh@66 -- # sleep 1 00:12:14.722 11:56:20 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:14.722 [2024-11-29 11:56:20.144793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:14.722 [2024-11-29 11:56:20.144923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70622 ] 00:12:14.980 [2024-11-29 11:56:20.286431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.980 [2024-11-29 11:56:20.396723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.239  [2024-11-29T11:56:21.020Z] Copying: 512/512 [B] (average 500 kBps) 00:12:15.509 00:12:15.509 11:56:20 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:15.509 11:56:20 -- dd/posix.sh@69 -- # (( atime_if == 1732881378 )) 00:12:15.509 11:56:20 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:15.509 11:56:20 -- dd/posix.sh@70 -- # (( atime_of == 1732881379 )) 00:12:15.509 11:56:20 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:15.509 [2024-11-29 11:56:20.869339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:15.509 [2024-11-29 11:56:20.869577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70635 ] 00:12:15.509 [2024-11-29 11:56:21.013049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.767 [2024-11-29 11:56:21.122021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.767  [2024-11-29T11:56:21.537Z] Copying: 512/512 [B] (average 500 kBps) 00:12:16.026 00:12:16.286 11:56:21 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:16.286 11:56:21 -- dd/posix.sh@73 -- # (( atime_if < 1732881381 )) 00:12:16.286 00:12:16.286 real 0m2.478s 00:12:16.286 user 0m0.830s 00:12:16.286 sys 0m0.405s 00:12:16.286 11:56:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.286 ************************************ 00:12:16.286 END TEST dd_flag_noatime_forced_aio 00:12:16.286 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:12:16.286 ************************************ 00:12:16.286 11:56:21 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:12:16.286 11:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:16.286 11:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.286 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:12:16.286 ************************************ 00:12:16.286 START TEST dd_flags_misc_forced_aio 00:12:16.286 ************************************ 00:12:16.286 11:56:21 -- common/autotest_common.sh@1114 -- # io 00:12:16.286 11:56:21 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:12:16.286 11:56:21 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:12:16.286 11:56:21 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:12:16.286 11:56:21 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:16.286 11:56:21 -- dd/posix.sh@86 -- # gen_bytes 512 00:12:16.286 11:56:21 -- dd/common.sh@98 -- # xtrace_disable 00:12:16.286 11:56:21 -- common/autotest_common.sh@10 -- # set +x 00:12:16.286 11:56:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:16.286 11:56:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:16.286 [2024-11-29 11:56:21.665542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:16.286 [2024-11-29 11:56:21.665682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70667 ] 00:12:16.545 [2024-11-29 11:56:21.805967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.545 [2024-11-29 11:56:21.931746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.545  [2024-11-29T11:56:22.625Z] Copying: 512/512 [B] (average 500 kBps) 00:12:17.114 00:12:17.114 11:56:22 -- dd/posix.sh@93 -- # [[ 37emxjw1drnhf8u32rqbcnvdhhp1ebh2oeey0yi9q48iwa9wymescux8bx5yigtunodymi6mx05phtv6048f9u3mo8jybg6h4px3tkjg2jp929kxq3fzqvu9n8ruxoqchsrggr3iiklb3161gh4kkdvwckl2qok5qarmqzsrcslqbccounhextpjlu3d4yep6k0m66n6veugxahmioh5u9zxyk3g6fki4c94ivwm7tlmwyvixvi3w3h1b1fszcgwa5vv8crtwjyl2n5r1dun3j3o7rzfas3fux5ltr15etu7mfsi8vkmxi8rj96q0bs688ejubh2chz3y2sz8euhrmrbifradh5sas63510yganvlajrqrqarxi4bb41nwyt3rbrlab18phf4lkiw9ukx4jbzzj8592gh6bgqmj3t7ytvs0g17f49oo1didfnn99gqa4ixfwgggs5c8sicaxu60f23bp9oupanvd0cw91y0aqfx4ddzbsjrwl4x3ubv6 == \3\7\e\m\x\j\w\1\d\r\n\h\f\8\u\3\2\r\q\b\c\n\v\d\h\h\p\1\e\b\h\2\o\e\e\y\0\y\i\9\q\4\8\i\w\a\9\w\y\m\e\s\c\u\x\8\b\x\5\y\i\g\t\u\n\o\d\y\m\i\6\m\x\0\5\p\h\t\v\6\0\4\8\f\9\u\3\m\o\8\j\y\b\g\6\h\4\p\x\3\t\k\j\g\2\j\p\9\2\9\k\x\q\3\f\z\q\v\u\9\n\8\r\u\x\o\q\c\h\s\r\g\g\r\3\i\i\k\l\b\3\1\6\1\g\h\4\k\k\d\v\w\c\k\l\2\q\o\k\5\q\a\r\m\q\z\s\r\c\s\l\q\b\c\c\o\u\n\h\e\x\t\p\j\l\u\3\d\4\y\e\p\6\k\0\m\6\6\n\6\v\e\u\g\x\a\h\m\i\o\h\5\u\9\z\x\y\k\3\g\6\f\k\i\4\c\9\4\i\v\w\m\7\t\l\m\w\y\v\i\x\v\i\3\w\3\h\1\b\1\f\s\z\c\g\w\a\5\v\v\8\c\r\t\w\j\y\l\2\n\5\r\1\d\u\n\3\j\3\o\7\r\z\f\a\s\3\f\u\x\5\l\t\r\1\5\e\t\u\7\m\f\s\i\8\v\k\m\x\i\8\r\j\9\6\q\0\b\s\6\8\8\e\j\u\b\h\2\c\h\z\3\y\2\s\z\8\e\u\h\r\m\r\b\i\f\r\a\d\h\5\s\a\s\6\3\5\1\0\y\g\a\n\v\l\a\j\r\q\r\q\a\r\x\i\4\b\b\4\1\n\w\y\t\3\r\b\r\l\a\b\1\8\p\h\f\4\l\k\i\w\9\u\k\x\4\j\b\z\z\j\8\5\9\2\g\h\6\b\g\q\m\j\3\t\7\y\t\v\s\0\g\1\7\f\4\9\o\o\1\d\i\d\f\n\n\9\9\g\q\a\4\i\x\f\w\g\g\g\s\5\c\8\s\i\c\a\x\u\6\0\f\2\3\b\p\9\o\u\p\a\n\v\d\0\c\w\9\1\y\0\a\q\f\x\4\d\d\z\b\s\j\r\w\l\4\x\3\u\b\v\6 ]] 00:12:17.114 11:56:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:17.114 11:56:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:17.114 [2024-11-29 11:56:22.422983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:17.114 [2024-11-29 11:56:22.423117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70680 ] 00:12:17.114 [2024-11-29 11:56:22.561819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.373 [2024-11-29 11:56:22.693554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.373  [2024-11-29T11:56:23.143Z] Copying: 512/512 [B] (average 500 kBps) 00:12:17.632 00:12:17.632 11:56:23 -- dd/posix.sh@93 -- # [[ 37emxjw1drnhf8u32rqbcnvdhhp1ebh2oeey0yi9q48iwa9wymescux8bx5yigtunodymi6mx05phtv6048f9u3mo8jybg6h4px3tkjg2jp929kxq3fzqvu9n8ruxoqchsrggr3iiklb3161gh4kkdvwckl2qok5qarmqzsrcslqbccounhextpjlu3d4yep6k0m66n6veugxahmioh5u9zxyk3g6fki4c94ivwm7tlmwyvixvi3w3h1b1fszcgwa5vv8crtwjyl2n5r1dun3j3o7rzfas3fux5ltr15etu7mfsi8vkmxi8rj96q0bs688ejubh2chz3y2sz8euhrmrbifradh5sas63510yganvlajrqrqarxi4bb41nwyt3rbrlab18phf4lkiw9ukx4jbzzj8592gh6bgqmj3t7ytvs0g17f49oo1didfnn99gqa4ixfwgggs5c8sicaxu60f23bp9oupanvd0cw91y0aqfx4ddzbsjrwl4x3ubv6 == \3\7\e\m\x\j\w\1\d\r\n\h\f\8\u\3\2\r\q\b\c\n\v\d\h\h\p\1\e\b\h\2\o\e\e\y\0\y\i\9\q\4\8\i\w\a\9\w\y\m\e\s\c\u\x\8\b\x\5\y\i\g\t\u\n\o\d\y\m\i\6\m\x\0\5\p\h\t\v\6\0\4\8\f\9\u\3\m\o\8\j\y\b\g\6\h\4\p\x\3\t\k\j\g\2\j\p\9\2\9\k\x\q\3\f\z\q\v\u\9\n\8\r\u\x\o\q\c\h\s\r\g\g\r\3\i\i\k\l\b\3\1\6\1\g\h\4\k\k\d\v\w\c\k\l\2\q\o\k\5\q\a\r\m\q\z\s\r\c\s\l\q\b\c\c\o\u\n\h\e\x\t\p\j\l\u\3\d\4\y\e\p\6\k\0\m\6\6\n\6\v\e\u\g\x\a\h\m\i\o\h\5\u\9\z\x\y\k\3\g\6\f\k\i\4\c\9\4\i\v\w\m\7\t\l\m\w\y\v\i\x\v\i\3\w\3\h\1\b\1\f\s\z\c\g\w\a\5\v\v\8\c\r\t\w\j\y\l\2\n\5\r\1\d\u\n\3\j\3\o\7\r\z\f\a\s\3\f\u\x\5\l\t\r\1\5\e\t\u\7\m\f\s\i\8\v\k\m\x\i\8\r\j\9\6\q\0\b\s\6\8\8\e\j\u\b\h\2\c\h\z\3\y\2\s\z\8\e\u\h\r\m\r\b\i\f\r\a\d\h\5\s\a\s\6\3\5\1\0\y\g\a\n\v\l\a\j\r\q\r\q\a\r\x\i\4\b\b\4\1\n\w\y\t\3\r\b\r\l\a\b\1\8\p\h\f\4\l\k\i\w\9\u\k\x\4\j\b\z\z\j\8\5\9\2\g\h\6\b\g\q\m\j\3\t\7\y\t\v\s\0\g\1\7\f\4\9\o\o\1\d\i\d\f\n\n\9\9\g\q\a\4\i\x\f\w\g\g\g\s\5\c\8\s\i\c\a\x\u\6\0\f\2\3\b\p\9\o\u\p\a\n\v\d\0\c\w\9\1\y\0\a\q\f\x\4\d\d\z\b\s\j\r\w\l\4\x\3\u\b\v\6 ]] 00:12:17.632 11:56:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:17.632 11:56:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:17.893 [2024-11-29 11:56:23.190422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:17.893 [2024-11-29 11:56:23.190606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70687 ] 00:12:17.893 [2024-11-29 11:56:23.328896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.151 [2024-11-29 11:56:23.453869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.151  [2024-11-29T11:56:23.921Z] Copying: 512/512 [B] (average 166 kBps) 00:12:18.410 00:12:18.410 11:56:23 -- dd/posix.sh@93 -- # [[ 37emxjw1drnhf8u32rqbcnvdhhp1ebh2oeey0yi9q48iwa9wymescux8bx5yigtunodymi6mx05phtv6048f9u3mo8jybg6h4px3tkjg2jp929kxq3fzqvu9n8ruxoqchsrggr3iiklb3161gh4kkdvwckl2qok5qarmqzsrcslqbccounhextpjlu3d4yep6k0m66n6veugxahmioh5u9zxyk3g6fki4c94ivwm7tlmwyvixvi3w3h1b1fszcgwa5vv8crtwjyl2n5r1dun3j3o7rzfas3fux5ltr15etu7mfsi8vkmxi8rj96q0bs688ejubh2chz3y2sz8euhrmrbifradh5sas63510yganvlajrqrqarxi4bb41nwyt3rbrlab18phf4lkiw9ukx4jbzzj8592gh6bgqmj3t7ytvs0g17f49oo1didfnn99gqa4ixfwgggs5c8sicaxu60f23bp9oupanvd0cw91y0aqfx4ddzbsjrwl4x3ubv6 == \3\7\e\m\x\j\w\1\d\r\n\h\f\8\u\3\2\r\q\b\c\n\v\d\h\h\p\1\e\b\h\2\o\e\e\y\0\y\i\9\q\4\8\i\w\a\9\w\y\m\e\s\c\u\x\8\b\x\5\y\i\g\t\u\n\o\d\y\m\i\6\m\x\0\5\p\h\t\v\6\0\4\8\f\9\u\3\m\o\8\j\y\b\g\6\h\4\p\x\3\t\k\j\g\2\j\p\9\2\9\k\x\q\3\f\z\q\v\u\9\n\8\r\u\x\o\q\c\h\s\r\g\g\r\3\i\i\k\l\b\3\1\6\1\g\h\4\k\k\d\v\w\c\k\l\2\q\o\k\5\q\a\r\m\q\z\s\r\c\s\l\q\b\c\c\o\u\n\h\e\x\t\p\j\l\u\3\d\4\y\e\p\6\k\0\m\6\6\n\6\v\e\u\g\x\a\h\m\i\o\h\5\u\9\z\x\y\k\3\g\6\f\k\i\4\c\9\4\i\v\w\m\7\t\l\m\w\y\v\i\x\v\i\3\w\3\h\1\b\1\f\s\z\c\g\w\a\5\v\v\8\c\r\t\w\j\y\l\2\n\5\r\1\d\u\n\3\j\3\o\7\r\z\f\a\s\3\f\u\x\5\l\t\r\1\5\e\t\u\7\m\f\s\i\8\v\k\m\x\i\8\r\j\9\6\q\0\b\s\6\8\8\e\j\u\b\h\2\c\h\z\3\y\2\s\z\8\e\u\h\r\m\r\b\i\f\r\a\d\h\5\s\a\s\6\3\5\1\0\y\g\a\n\v\l\a\j\r\q\r\q\a\r\x\i\4\b\b\4\1\n\w\y\t\3\r\b\r\l\a\b\1\8\p\h\f\4\l\k\i\w\9\u\k\x\4\j\b\z\z\j\8\5\9\2\g\h\6\b\g\q\m\j\3\t\7\y\t\v\s\0\g\1\7\f\4\9\o\o\1\d\i\d\f\n\n\9\9\g\q\a\4\i\x\f\w\g\g\g\s\5\c\8\s\i\c\a\x\u\6\0\f\2\3\b\p\9\o\u\p\a\n\v\d\0\c\w\9\1\y\0\a\q\f\x\4\d\d\z\b\s\j\r\w\l\4\x\3\u\b\v\6 ]] 00:12:18.410 11:56:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:18.410 11:56:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:18.668 [2024-11-29 11:56:23.967098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:18.668 [2024-11-29 11:56:23.967230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70695 ] 00:12:18.668 [2024-11-29 11:56:24.105557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.927 [2024-11-29 11:56:24.233225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.927  [2024-11-29T11:56:24.697Z] Copying: 512/512 [B] (average 250 kBps) 00:12:19.186 00:12:19.186 11:56:24 -- dd/posix.sh@93 -- # [[ 37emxjw1drnhf8u32rqbcnvdhhp1ebh2oeey0yi9q48iwa9wymescux8bx5yigtunodymi6mx05phtv6048f9u3mo8jybg6h4px3tkjg2jp929kxq3fzqvu9n8ruxoqchsrggr3iiklb3161gh4kkdvwckl2qok5qarmqzsrcslqbccounhextpjlu3d4yep6k0m66n6veugxahmioh5u9zxyk3g6fki4c94ivwm7tlmwyvixvi3w3h1b1fszcgwa5vv8crtwjyl2n5r1dun3j3o7rzfas3fux5ltr15etu7mfsi8vkmxi8rj96q0bs688ejubh2chz3y2sz8euhrmrbifradh5sas63510yganvlajrqrqarxi4bb41nwyt3rbrlab18phf4lkiw9ukx4jbzzj8592gh6bgqmj3t7ytvs0g17f49oo1didfnn99gqa4ixfwgggs5c8sicaxu60f23bp9oupanvd0cw91y0aqfx4ddzbsjrwl4x3ubv6 == \3\7\e\m\x\j\w\1\d\r\n\h\f\8\u\3\2\r\q\b\c\n\v\d\h\h\p\1\e\b\h\2\o\e\e\y\0\y\i\9\q\4\8\i\w\a\9\w\y\m\e\s\c\u\x\8\b\x\5\y\i\g\t\u\n\o\d\y\m\i\6\m\x\0\5\p\h\t\v\6\0\4\8\f\9\u\3\m\o\8\j\y\b\g\6\h\4\p\x\3\t\k\j\g\2\j\p\9\2\9\k\x\q\3\f\z\q\v\u\9\n\8\r\u\x\o\q\c\h\s\r\g\g\r\3\i\i\k\l\b\3\1\6\1\g\h\4\k\k\d\v\w\c\k\l\2\q\o\k\5\q\a\r\m\q\z\s\r\c\s\l\q\b\c\c\o\u\n\h\e\x\t\p\j\l\u\3\d\4\y\e\p\6\k\0\m\6\6\n\6\v\e\u\g\x\a\h\m\i\o\h\5\u\9\z\x\y\k\3\g\6\f\k\i\4\c\9\4\i\v\w\m\7\t\l\m\w\y\v\i\x\v\i\3\w\3\h\1\b\1\f\s\z\c\g\w\a\5\v\v\8\c\r\t\w\j\y\l\2\n\5\r\1\d\u\n\3\j\3\o\7\r\z\f\a\s\3\f\u\x\5\l\t\r\1\5\e\t\u\7\m\f\s\i\8\v\k\m\x\i\8\r\j\9\6\q\0\b\s\6\8\8\e\j\u\b\h\2\c\h\z\3\y\2\s\z\8\e\u\h\r\m\r\b\i\f\r\a\d\h\5\s\a\s\6\3\5\1\0\y\g\a\n\v\l\a\j\r\q\r\q\a\r\x\i\4\b\b\4\1\n\w\y\t\3\r\b\r\l\a\b\1\8\p\h\f\4\l\k\i\w\9\u\k\x\4\j\b\z\z\j\8\5\9\2\g\h\6\b\g\q\m\j\3\t\7\y\t\v\s\0\g\1\7\f\4\9\o\o\1\d\i\d\f\n\n\9\9\g\q\a\4\i\x\f\w\g\g\g\s\5\c\8\s\i\c\a\x\u\6\0\f\2\3\b\p\9\o\u\p\a\n\v\d\0\c\w\9\1\y\0\a\q\f\x\4\d\d\z\b\s\j\r\w\l\4\x\3\u\b\v\6 ]] 00:12:19.186 11:56:24 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:12:19.186 11:56:24 -- dd/posix.sh@86 -- # gen_bytes 512 00:12:19.186 11:56:24 -- dd/common.sh@98 -- # xtrace_disable 00:12:19.186 11:56:24 -- common/autotest_common.sh@10 -- # set +x 00:12:19.186 11:56:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:19.186 11:56:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:12:19.444 [2024-11-29 11:56:24.746194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:19.444 [2024-11-29 11:56:24.746330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:12:19.444 [2024-11-29 11:56:24.884023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.702 [2024-11-29 11:56:25.012590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.702  [2024-11-29T11:56:25.472Z] Copying: 512/512 [B] (average 500 kBps) 00:12:19.961 00:12:19.961 11:56:25 -- dd/posix.sh@93 -- # [[ lymyx1c1tza6uahax004v10k2sxuss3f95ktzp6evb8e7109s8c14py04gg1x7bvk7vvq3ix5dw8xx4p1pm8enxpj1bmpnidudg8b8zi92jr9uuzd7loq45krz8qnzggpd7l0oakv374qgucwuxbt1xyd9wk5ce1cx4t8x0rrnhdjjdflfxky01mivo4ashxmynp0jk9h7zzfzicpqof6gc0ipfulko27qehwhkvzdy3saba7kr9sbbfksszb5ulwtg4wcr4xk7xwpn5ahw7uxsmghw5cgsj8btcokl82oxv340zt7u6xlhfxawwg4zi46sneuw7ivcvf717pd2fvwoemen5ba8hirc336wr3fvxo8upaqwqzgs0jd5y5s0nw3ney4mlnfj3wsf1t6yifam1o83pw3oo9c1wqbfotsvysaj8naqh1gnptr28dgzs9np8qq1705gveds61ukn8jsstyeakars0zxq5bspu2sgt84t5i2d0yj23reymzi1 == \l\y\m\y\x\1\c\1\t\z\a\6\u\a\h\a\x\0\0\4\v\1\0\k\2\s\x\u\s\s\3\f\9\5\k\t\z\p\6\e\v\b\8\e\7\1\0\9\s\8\c\1\4\p\y\0\4\g\g\1\x\7\b\v\k\7\v\v\q\3\i\x\5\d\w\8\x\x\4\p\1\p\m\8\e\n\x\p\j\1\b\m\p\n\i\d\u\d\g\8\b\8\z\i\9\2\j\r\9\u\u\z\d\7\l\o\q\4\5\k\r\z\8\q\n\z\g\g\p\d\7\l\0\o\a\k\v\3\7\4\q\g\u\c\w\u\x\b\t\1\x\y\d\9\w\k\5\c\e\1\c\x\4\t\8\x\0\r\r\n\h\d\j\j\d\f\l\f\x\k\y\0\1\m\i\v\o\4\a\s\h\x\m\y\n\p\0\j\k\9\h\7\z\z\f\z\i\c\p\q\o\f\6\g\c\0\i\p\f\u\l\k\o\2\7\q\e\h\w\h\k\v\z\d\y\3\s\a\b\a\7\k\r\9\s\b\b\f\k\s\s\z\b\5\u\l\w\t\g\4\w\c\r\4\x\k\7\x\w\p\n\5\a\h\w\7\u\x\s\m\g\h\w\5\c\g\s\j\8\b\t\c\o\k\l\8\2\o\x\v\3\4\0\z\t\7\u\6\x\l\h\f\x\a\w\w\g\4\z\i\4\6\s\n\e\u\w\7\i\v\c\v\f\7\1\7\p\d\2\f\v\w\o\e\m\e\n\5\b\a\8\h\i\r\c\3\3\6\w\r\3\f\v\x\o\8\u\p\a\q\w\q\z\g\s\0\j\d\5\y\5\s\0\n\w\3\n\e\y\4\m\l\n\f\j\3\w\s\f\1\t\6\y\i\f\a\m\1\o\8\3\p\w\3\o\o\9\c\1\w\q\b\f\o\t\s\v\y\s\a\j\8\n\a\q\h\1\g\n\p\t\r\2\8\d\g\z\s\9\n\p\8\q\q\1\7\0\5\g\v\e\d\s\6\1\u\k\n\8\j\s\s\t\y\e\a\k\a\r\s\0\z\x\q\5\b\s\p\u\2\s\g\t\8\4\t\5\i\2\d\0\y\j\2\3\r\e\y\m\z\i\1 ]] 00:12:19.961 11:56:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:19.961 11:56:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:12:20.220 [2024-11-29 11:56:25.521701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:20.220 [2024-11-29 11:56:25.521878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70721 ] 00:12:20.220 [2024-11-29 11:56:25.661015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.478 [2024-11-29 11:56:25.775948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.478  [2024-11-29T11:56:26.248Z] Copying: 512/512 [B] (average 500 kBps) 00:12:20.737 00:12:20.737 11:56:26 -- dd/posix.sh@93 -- # [[ lymyx1c1tza6uahax004v10k2sxuss3f95ktzp6evb8e7109s8c14py04gg1x7bvk7vvq3ix5dw8xx4p1pm8enxpj1bmpnidudg8b8zi92jr9uuzd7loq45krz8qnzggpd7l0oakv374qgucwuxbt1xyd9wk5ce1cx4t8x0rrnhdjjdflfxky01mivo4ashxmynp0jk9h7zzfzicpqof6gc0ipfulko27qehwhkvzdy3saba7kr9sbbfksszb5ulwtg4wcr4xk7xwpn5ahw7uxsmghw5cgsj8btcokl82oxv340zt7u6xlhfxawwg4zi46sneuw7ivcvf717pd2fvwoemen5ba8hirc336wr3fvxo8upaqwqzgs0jd5y5s0nw3ney4mlnfj3wsf1t6yifam1o83pw3oo9c1wqbfotsvysaj8naqh1gnptr28dgzs9np8qq1705gveds61ukn8jsstyeakars0zxq5bspu2sgt84t5i2d0yj23reymzi1 == \l\y\m\y\x\1\c\1\t\z\a\6\u\a\h\a\x\0\0\4\v\1\0\k\2\s\x\u\s\s\3\f\9\5\k\t\z\p\6\e\v\b\8\e\7\1\0\9\s\8\c\1\4\p\y\0\4\g\g\1\x\7\b\v\k\7\v\v\q\3\i\x\5\d\w\8\x\x\4\p\1\p\m\8\e\n\x\p\j\1\b\m\p\n\i\d\u\d\g\8\b\8\z\i\9\2\j\r\9\u\u\z\d\7\l\o\q\4\5\k\r\z\8\q\n\z\g\g\p\d\7\l\0\o\a\k\v\3\7\4\q\g\u\c\w\u\x\b\t\1\x\y\d\9\w\k\5\c\e\1\c\x\4\t\8\x\0\r\r\n\h\d\j\j\d\f\l\f\x\k\y\0\1\m\i\v\o\4\a\s\h\x\m\y\n\p\0\j\k\9\h\7\z\z\f\z\i\c\p\q\o\f\6\g\c\0\i\p\f\u\l\k\o\2\7\q\e\h\w\h\k\v\z\d\y\3\s\a\b\a\7\k\r\9\s\b\b\f\k\s\s\z\b\5\u\l\w\t\g\4\w\c\r\4\x\k\7\x\w\p\n\5\a\h\w\7\u\x\s\m\g\h\w\5\c\g\s\j\8\b\t\c\o\k\l\8\2\o\x\v\3\4\0\z\t\7\u\6\x\l\h\f\x\a\w\w\g\4\z\i\4\6\s\n\e\u\w\7\i\v\c\v\f\7\1\7\p\d\2\f\v\w\o\e\m\e\n\5\b\a\8\h\i\r\c\3\3\6\w\r\3\f\v\x\o\8\u\p\a\q\w\q\z\g\s\0\j\d\5\y\5\s\0\n\w\3\n\e\y\4\m\l\n\f\j\3\w\s\f\1\t\6\y\i\f\a\m\1\o\8\3\p\w\3\o\o\9\c\1\w\q\b\f\o\t\s\v\y\s\a\j\8\n\a\q\h\1\g\n\p\t\r\2\8\d\g\z\s\9\n\p\8\q\q\1\7\0\5\g\v\e\d\s\6\1\u\k\n\8\j\s\s\t\y\e\a\k\a\r\s\0\z\x\q\5\b\s\p\u\2\s\g\t\8\4\t\5\i\2\d\0\y\j\2\3\r\e\y\m\z\i\1 ]] 00:12:20.737 11:56:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:20.737 11:56:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:12:20.996 [2024-11-29 11:56:26.266548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:20.996 [2024-11-29 11:56:26.266668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70723 ] 00:12:20.996 [2024-11-29 11:56:26.405587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.254 [2024-11-29 11:56:26.528238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.254  [2024-11-29T11:56:27.023Z] Copying: 512/512 [B] (average 500 kBps) 00:12:21.512 00:12:21.512 11:56:26 -- dd/posix.sh@93 -- # [[ lymyx1c1tza6uahax004v10k2sxuss3f95ktzp6evb8e7109s8c14py04gg1x7bvk7vvq3ix5dw8xx4p1pm8enxpj1bmpnidudg8b8zi92jr9uuzd7loq45krz8qnzggpd7l0oakv374qgucwuxbt1xyd9wk5ce1cx4t8x0rrnhdjjdflfxky01mivo4ashxmynp0jk9h7zzfzicpqof6gc0ipfulko27qehwhkvzdy3saba7kr9sbbfksszb5ulwtg4wcr4xk7xwpn5ahw7uxsmghw5cgsj8btcokl82oxv340zt7u6xlhfxawwg4zi46sneuw7ivcvf717pd2fvwoemen5ba8hirc336wr3fvxo8upaqwqzgs0jd5y5s0nw3ney4mlnfj3wsf1t6yifam1o83pw3oo9c1wqbfotsvysaj8naqh1gnptr28dgzs9np8qq1705gveds61ukn8jsstyeakars0zxq5bspu2sgt84t5i2d0yj23reymzi1 == \l\y\m\y\x\1\c\1\t\z\a\6\u\a\h\a\x\0\0\4\v\1\0\k\2\s\x\u\s\s\3\f\9\5\k\t\z\p\6\e\v\b\8\e\7\1\0\9\s\8\c\1\4\p\y\0\4\g\g\1\x\7\b\v\k\7\v\v\q\3\i\x\5\d\w\8\x\x\4\p\1\p\m\8\e\n\x\p\j\1\b\m\p\n\i\d\u\d\g\8\b\8\z\i\9\2\j\r\9\u\u\z\d\7\l\o\q\4\5\k\r\z\8\q\n\z\g\g\p\d\7\l\0\o\a\k\v\3\7\4\q\g\u\c\w\u\x\b\t\1\x\y\d\9\w\k\5\c\e\1\c\x\4\t\8\x\0\r\r\n\h\d\j\j\d\f\l\f\x\k\y\0\1\m\i\v\o\4\a\s\h\x\m\y\n\p\0\j\k\9\h\7\z\z\f\z\i\c\p\q\o\f\6\g\c\0\i\p\f\u\l\k\o\2\7\q\e\h\w\h\k\v\z\d\y\3\s\a\b\a\7\k\r\9\s\b\b\f\k\s\s\z\b\5\u\l\w\t\g\4\w\c\r\4\x\k\7\x\w\p\n\5\a\h\w\7\u\x\s\m\g\h\w\5\c\g\s\j\8\b\t\c\o\k\l\8\2\o\x\v\3\4\0\z\t\7\u\6\x\l\h\f\x\a\w\w\g\4\z\i\4\6\s\n\e\u\w\7\i\v\c\v\f\7\1\7\p\d\2\f\v\w\o\e\m\e\n\5\b\a\8\h\i\r\c\3\3\6\w\r\3\f\v\x\o\8\u\p\a\q\w\q\z\g\s\0\j\d\5\y\5\s\0\n\w\3\n\e\y\4\m\l\n\f\j\3\w\s\f\1\t\6\y\i\f\a\m\1\o\8\3\p\w\3\o\o\9\c\1\w\q\b\f\o\t\s\v\y\s\a\j\8\n\a\q\h\1\g\n\p\t\r\2\8\d\g\z\s\9\n\p\8\q\q\1\7\0\5\g\v\e\d\s\6\1\u\k\n\8\j\s\s\t\y\e\a\k\a\r\s\0\z\x\q\5\b\s\p\u\2\s\g\t\8\4\t\5\i\2\d\0\y\j\2\3\r\e\y\m\z\i\1 ]] 00:12:21.512 11:56:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:12:21.512 11:56:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:12:21.771 [2024-11-29 11:56:27.022582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:21.771 [2024-11-29 11:56:27.022782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:12:21.771 [2024-11-29 11:56:27.162852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.029 [2024-11-29 11:56:27.291289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.029  [2024-11-29T11:56:27.800Z] Copying: 512/512 [B] (average 500 kBps) 00:12:22.289 00:12:22.289 11:56:27 -- dd/posix.sh@93 -- # [[ lymyx1c1tza6uahax004v10k2sxuss3f95ktzp6evb8e7109s8c14py04gg1x7bvk7vvq3ix5dw8xx4p1pm8enxpj1bmpnidudg8b8zi92jr9uuzd7loq45krz8qnzggpd7l0oakv374qgucwuxbt1xyd9wk5ce1cx4t8x0rrnhdjjdflfxky01mivo4ashxmynp0jk9h7zzfzicpqof6gc0ipfulko27qehwhkvzdy3saba7kr9sbbfksszb5ulwtg4wcr4xk7xwpn5ahw7uxsmghw5cgsj8btcokl82oxv340zt7u6xlhfxawwg4zi46sneuw7ivcvf717pd2fvwoemen5ba8hirc336wr3fvxo8upaqwqzgs0jd5y5s0nw3ney4mlnfj3wsf1t6yifam1o83pw3oo9c1wqbfotsvysaj8naqh1gnptr28dgzs9np8qq1705gveds61ukn8jsstyeakars0zxq5bspu2sgt84t5i2d0yj23reymzi1 == \l\y\m\y\x\1\c\1\t\z\a\6\u\a\h\a\x\0\0\4\v\1\0\k\2\s\x\u\s\s\3\f\9\5\k\t\z\p\6\e\v\b\8\e\7\1\0\9\s\8\c\1\4\p\y\0\4\g\g\1\x\7\b\v\k\7\v\v\q\3\i\x\5\d\w\8\x\x\4\p\1\p\m\8\e\n\x\p\j\1\b\m\p\n\i\d\u\d\g\8\b\8\z\i\9\2\j\r\9\u\u\z\d\7\l\o\q\4\5\k\r\z\8\q\n\z\g\g\p\d\7\l\0\o\a\k\v\3\7\4\q\g\u\c\w\u\x\b\t\1\x\y\d\9\w\k\5\c\e\1\c\x\4\t\8\x\0\r\r\n\h\d\j\j\d\f\l\f\x\k\y\0\1\m\i\v\o\4\a\s\h\x\m\y\n\p\0\j\k\9\h\7\z\z\f\z\i\c\p\q\o\f\6\g\c\0\i\p\f\u\l\k\o\2\7\q\e\h\w\h\k\v\z\d\y\3\s\a\b\a\7\k\r\9\s\b\b\f\k\s\s\z\b\5\u\l\w\t\g\4\w\c\r\4\x\k\7\x\w\p\n\5\a\h\w\7\u\x\s\m\g\h\w\5\c\g\s\j\8\b\t\c\o\k\l\8\2\o\x\v\3\4\0\z\t\7\u\6\x\l\h\f\x\a\w\w\g\4\z\i\4\6\s\n\e\u\w\7\i\v\c\v\f\7\1\7\p\d\2\f\v\w\o\e\m\e\n\5\b\a\8\h\i\r\c\3\3\6\w\r\3\f\v\x\o\8\u\p\a\q\w\q\z\g\s\0\j\d\5\y\5\s\0\n\w\3\n\e\y\4\m\l\n\f\j\3\w\s\f\1\t\6\y\i\f\a\m\1\o\8\3\p\w\3\o\o\9\c\1\w\q\b\f\o\t\s\v\y\s\a\j\8\n\a\q\h\1\g\n\p\t\r\2\8\d\g\z\s\9\n\p\8\q\q\1\7\0\5\g\v\e\d\s\6\1\u\k\n\8\j\s\s\t\y\e\a\k\a\r\s\0\z\x\q\5\b\s\p\u\2\s\g\t\8\4\t\5\i\2\d\0\y\j\2\3\r\e\y\m\z\i\1 ]] 00:12:22.289 00:12:22.289 real 0m6.138s 00:12:22.289 user 0m3.616s 00:12:22.289 sys 0m1.540s 00:12:22.289 11:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.289 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.289 ************************************ 00:12:22.289 END TEST dd_flags_misc_forced_aio 00:12:22.289 ************************************ 00:12:22.289 11:56:27 -- dd/posix.sh@1 -- # cleanup 00:12:22.289 11:56:27 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:12:22.289 11:56:27 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:12:22.289 00:12:22.289 real 0m27.040s 00:12:22.289 user 0m14.555s 00:12:22.289 sys 0m6.649s 00:12:22.289 11:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.289 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.289 ************************************ 00:12:22.289 END TEST spdk_dd_posix 00:12:22.289 ************************************ 00:12:22.549 11:56:27 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:12:22.549 11:56:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:22.549 11:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 
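The dd_flags_misc_forced_aio runs above copy 512 generated bytes from dd.dump0 to dd.dump1 through spdk_dd's AIO path, pairing each read flag in flags_ro=(direct nonblock) with each write flag in flags_rw=(direct nonblock sync dsync) and checking that the payload survives every combination. A minimal sketch of that loop, assuming the same spdk_dd binary path used in the log, with head -c /dev/urandom standing in for the suite's gen_bytes helper and cmp standing in for its payload check (dd.dump0/dd.dump1 abbreviate the full test paths):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > dd.dump0        # 512-byte source payload, as in gen_bytes 512
for ro in direct nonblock; do
  for rw in direct nonblock sync dsync; do
    "$SPDK_DD" --aio --if=dd.dump0 --iflag=$ro --of=dd.dump1 --oflag=$rw
    cmp -s dd.dump0 dd.dump1 || echo "payload mismatch: iflag=$ro oflag=$rw"
  done
done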
00:12:22.549 11:56:27 -- common/autotest_common.sh@10 -- # set +x 00:12:22.549 ************************************ 00:12:22.549 START TEST spdk_dd_malloc 00:12:22.549 ************************************ 00:12:22.549 11:56:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:12:22.549 * Looking for test storage... 00:12:22.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:22.549 11:56:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:22.549 11:56:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:22.549 11:56:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:22.549 11:56:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:22.549 11:56:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:22.549 11:56:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:22.549 11:56:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:22.549 11:56:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:22.549 11:56:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:22.549 11:56:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.549 11:56:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:22.549 11:56:28 -- scripts/common.sh@337 -- # local 'op=<' 00:12:22.549 11:56:28 -- scripts/common.sh@339 -- # ver1_l=2 00:12:22.549 11:56:28 -- scripts/common.sh@340 -- # ver2_l=1 00:12:22.549 11:56:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:22.549 11:56:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:22.549 11:56:28 -- scripts/common.sh@344 -- # : 1 00:12:22.549 11:56:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:22.549 11:56:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.549 11:56:28 -- scripts/common.sh@364 -- # decimal 1 00:12:22.549 11:56:28 -- scripts/common.sh@352 -- # local d=1 00:12:22.549 11:56:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.549 11:56:28 -- scripts/common.sh@354 -- # echo 1 00:12:22.549 11:56:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:22.549 11:56:28 -- scripts/common.sh@365 -- # decimal 2 00:12:22.549 11:56:28 -- scripts/common.sh@352 -- # local d=2 00:12:22.549 11:56:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.549 11:56:28 -- scripts/common.sh@354 -- # echo 2 00:12:22.549 11:56:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:22.549 11:56:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.549 11:56:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:22.549 11:56:28 -- scripts/common.sh@367 -- # return 0 00:12:22.549 11:56:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.549 11:56:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:22.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.549 --rc genhtml_branch_coverage=1 00:12:22.549 --rc genhtml_function_coverage=1 00:12:22.549 --rc genhtml_legend=1 00:12:22.549 --rc geninfo_all_blocks=1 00:12:22.549 --rc geninfo_unexecuted_blocks=1 00:12:22.549 00:12:22.549 ' 00:12:22.549 11:56:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:22.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.550 --rc genhtml_branch_coverage=1 00:12:22.550 --rc genhtml_function_coverage=1 00:12:22.550 --rc genhtml_legend=1 00:12:22.550 --rc geninfo_all_blocks=1 00:12:22.550 --rc geninfo_unexecuted_blocks=1 00:12:22.550 00:12:22.550 ' 00:12:22.550 11:56:28 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:12:22.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.550 --rc genhtml_branch_coverage=1 00:12:22.550 --rc genhtml_function_coverage=1 00:12:22.550 --rc genhtml_legend=1 00:12:22.550 --rc geninfo_all_blocks=1 00:12:22.550 --rc geninfo_unexecuted_blocks=1 00:12:22.550 00:12:22.550 ' 00:12:22.550 11:56:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:22.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.550 --rc genhtml_branch_coverage=1 00:12:22.550 --rc genhtml_function_coverage=1 00:12:22.550 --rc genhtml_legend=1 00:12:22.550 --rc geninfo_all_blocks=1 00:12:22.550 --rc geninfo_unexecuted_blocks=1 00:12:22.550 00:12:22.550 ' 00:12:22.550 11:56:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.550 11:56:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.550 11:56:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.550 11:56:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.550 11:56:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.550 11:56:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.550 11:56:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.550 11:56:28 -- paths/export.sh@5 -- # export PATH 00:12:22.550 11:56:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.550 11:56:28 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:12:22.550 11:56:28 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:22.550 11:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.550 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:22.550 ************************************ 00:12:22.550 START TEST dd_malloc_copy 00:12:22.550 ************************************ 00:12:22.550 11:56:28 -- common/autotest_common.sh@1114 -- # malloc_copy 00:12:22.550 11:56:28 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:12:22.550 11:56:28 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:12:22.550 11:56:28 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:12:22.550 11:56:28 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:12:22.550 11:56:28 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:12:22.550 11:56:28 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:12:22.550 11:56:28 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:12:22.550 11:56:28 -- dd/malloc.sh@28 -- # gen_conf 00:12:22.550 11:56:28 -- dd/common.sh@31 -- # xtrace_disable 00:12:22.550 11:56:28 -- common/autotest_common.sh@10 -- # set +x 00:12:22.809 [2024-11-29 11:56:28.096724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:22.809 [2024-11-29 11:56:28.096840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:12:22.809 { 00:12:22.809 "subsystems": [ 00:12:22.809 { 00:12:22.809 "subsystem": "bdev", 00:12:22.809 "config": [ 00:12:22.809 { 00:12:22.809 "params": { 00:12:22.809 "block_size": 512, 00:12:22.809 "num_blocks": 1048576, 00:12:22.809 "name": "malloc0" 00:12:22.809 }, 00:12:22.809 "method": "bdev_malloc_create" 00:12:22.809 }, 00:12:22.809 { 00:12:22.809 "params": { 00:12:22.809 "block_size": 512, 00:12:22.809 "num_blocks": 1048576, 00:12:22.809 "name": "malloc1" 00:12:22.809 }, 00:12:22.809 "method": "bdev_malloc_create" 00:12:22.809 }, 00:12:22.809 { 00:12:22.809 "method": "bdev_wait_for_examine" 00:12:22.809 } 00:12:22.809 ] 00:12:22.809 } 00:12:22.809 ] 00:12:22.809 } 00:12:22.809 [2024-11-29 11:56:28.230645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.068 [2024-11-29 11:56:28.361370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.452  [2024-11-29T11:56:31.338Z] Copying: 196/512 [MB] (196 MBps) [2024-11-29T11:56:31.596Z] Copying: 392/512 [MB] (195 MBps) [2024-11-29T11:56:32.529Z] Copying: 512/512 [MB] (average 194 MBps) 00:12:27.018 00:12:27.018 11:56:32 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:12:27.018 11:56:32 -- dd/malloc.sh@33 -- # gen_conf 00:12:27.018 11:56:32 -- dd/common.sh@31 -- # xtrace_disable 00:12:27.018 11:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:27.018 [2024-11-29 11:56:32.463644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:27.018 [2024-11-29 11:56:32.463739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:12:27.018 { 00:12:27.018 "subsystems": [ 00:12:27.018 { 00:12:27.018 "subsystem": "bdev", 00:12:27.018 "config": [ 00:12:27.018 { 00:12:27.018 "params": { 00:12:27.018 "block_size": 512, 00:12:27.018 "num_blocks": 1048576, 00:12:27.018 "name": "malloc0" 00:12:27.018 }, 00:12:27.018 "method": "bdev_malloc_create" 00:12:27.018 }, 00:12:27.018 { 00:12:27.018 "params": { 00:12:27.018 "block_size": 512, 00:12:27.018 "num_blocks": 1048576, 00:12:27.018 "name": "malloc1" 00:12:27.018 }, 00:12:27.018 "method": "bdev_malloc_create" 00:12:27.018 }, 00:12:27.018 { 00:12:27.018 "method": "bdev_wait_for_examine" 00:12:27.018 } 00:12:27.018 ] 00:12:27.018 } 00:12:27.018 ] 00:12:27.018 } 00:12:27.276 [2024-11-29 11:56:32.596585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.276 [2024-11-29 11:56:32.721369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.180  [2024-11-29T11:56:35.267Z] Copying: 199/512 [MB] (199 MBps) [2024-11-29T11:56:35.842Z] Copying: 401/512 [MB] (201 MBps) [2024-11-29T11:56:36.777Z] Copying: 512/512 [MB] (average 202 MBps) 00:12:31.266 00:12:31.266 00:12:31.266 real 0m8.580s 00:12:31.266 user 0m7.240s 00:12:31.266 sys 0m1.184s 00:12:31.266 11:56:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.266 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.266 ************************************ 00:12:31.266 END TEST dd_malloc_copy 00:12:31.266 ************************************ 00:12:31.266 00:12:31.266 real 0m8.827s 00:12:31.266 user 0m7.359s 00:12:31.266 sys 0m1.309s 00:12:31.266 11:56:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.266 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.266 ************************************ 00:12:31.266 END TEST spdk_dd_malloc 00:12:31.266 ************************************ 00:12:31.266 11:56:36 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:12:31.266 11:56:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:31.266 11:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.266 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.266 ************************************ 00:12:31.266 START TEST spdk_dd_bdev_to_bdev 00:12:31.266 ************************************ 00:12:31.266 11:56:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:12:31.525 * Looking for test storage... 
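The dd_malloc_copy run above times spdk_dd copying 1,048,576 blocks of 512 bytes (512 MiB) from one malloc bdev to the other and then back, using the JSON bdev configuration printed in the log. A minimal sketch of an equivalent standalone invocation, assuming the configuration is written to a file rather than passed on /dev/fd/62 as the test does (the file name malloc.json is only illustrative):

cat > malloc.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_wait_for_examine"}
]}]}
EOF
# copy malloc0 -> malloc1, then back, mirroring the test's two timed passes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json malloc.json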
00:12:31.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:31.525 11:56:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.525 11:56:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.525 11:56:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.525 11:56:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.525 11:56:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.525 11:56:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.525 11:56:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.525 11:56:36 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.525 11:56:36 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.525 11:56:36 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.525 11:56:36 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.525 11:56:36 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.525 11:56:36 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.525 11:56:36 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.525 11:56:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.525 11:56:36 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.525 11:56:36 -- scripts/common.sh@344 -- # : 1 00:12:31.525 11:56:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.525 11:56:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.525 11:56:36 -- scripts/common.sh@364 -- # decimal 1 00:12:31.525 11:56:36 -- scripts/common.sh@352 -- # local d=1 00:12:31.525 11:56:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.525 11:56:36 -- scripts/common.sh@354 -- # echo 1 00:12:31.525 11:56:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.525 11:56:36 -- scripts/common.sh@365 -- # decimal 2 00:12:31.525 11:56:36 -- scripts/common.sh@352 -- # local d=2 00:12:31.525 11:56:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.525 11:56:36 -- scripts/common.sh@354 -- # echo 2 00:12:31.525 11:56:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.525 11:56:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.525 11:56:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.525 11:56:36 -- scripts/common.sh@367 -- # return 0 00:12:31.525 11:56:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.525 11:56:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.525 --rc genhtml_branch_coverage=1 00:12:31.525 --rc genhtml_function_coverage=1 00:12:31.525 --rc genhtml_legend=1 00:12:31.525 --rc geninfo_all_blocks=1 00:12:31.525 --rc geninfo_unexecuted_blocks=1 00:12:31.525 00:12:31.525 ' 00:12:31.525 11:56:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.525 --rc genhtml_branch_coverage=1 00:12:31.525 --rc genhtml_function_coverage=1 00:12:31.525 --rc genhtml_legend=1 00:12:31.525 --rc geninfo_all_blocks=1 00:12:31.525 --rc geninfo_unexecuted_blocks=1 00:12:31.525 00:12:31.525 ' 00:12:31.525 11:56:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.525 --rc genhtml_branch_coverage=1 00:12:31.525 --rc genhtml_function_coverage=1 00:12:31.525 --rc genhtml_legend=1 00:12:31.525 --rc geninfo_all_blocks=1 00:12:31.525 --rc geninfo_unexecuted_blocks=1 00:12:31.525 00:12:31.525 ' 00:12:31.525 11:56:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.525 --rc genhtml_branch_coverage=1 00:12:31.525 --rc genhtml_function_coverage=1 00:12:31.525 --rc genhtml_legend=1 00:12:31.525 --rc geninfo_all_blocks=1 00:12:31.525 --rc geninfo_unexecuted_blocks=1 00:12:31.525 00:12:31.525 ' 00:12:31.525 11:56:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.525 11:56:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.525 11:56:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.525 11:56:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.525 11:56:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.525 11:56:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.525 11:56:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.525 11:56:36 -- paths/export.sh@5 -- # export PATH 00:12:31.526 11:56:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:12:31.526 11:56:36 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:12:31.526 11:56:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:31.526 11:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.526 11:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:31.526 ************************************ 00:12:31.526 START TEST dd_inflate_file 00:12:31.526 ************************************ 00:12:31.526 11:56:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:12:31.526 [2024-11-29 11:56:36.989178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:31.526 [2024-11-29 11:56:36.989296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70997 ] 00:12:31.783 [2024-11-29 11:56:37.129095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.783 [2024-11-29 11:56:37.241188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.042  [2024-11-29T11:56:37.812Z] Copying: 64/64 [MB] (average 1391 MBps) 00:12:32.301 00:12:32.301 ************************************ 00:12:32.301 END TEST dd_inflate_file 00:12:32.301 ************************************ 00:12:32.301 00:12:32.301 real 0m0.772s 00:12:32.301 user 0m0.414s 00:12:32.301 sys 0m0.236s 00:12:32.301 11:56:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.301 11:56:37 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:12:32.301 11:56:37 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:12:32.301 11:56:37 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:12:32.301 11:56:37 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:12:32.301 11:56:37 -- dd/common.sh@31 -- # xtrace_disable 00:12:32.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.301 11:56:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:32.301 11:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.301 11:56:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.301 ************************************ 00:12:32.301 START TEST dd_copy_to_out_bdev 00:12:32.301 ************************************ 00:12:32.301 11:56:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:12:32.559 { 00:12:32.559 "subsystems": [ 00:12:32.559 { 00:12:32.559 "subsystem": "bdev", 00:12:32.559 "config": [ 00:12:32.559 { 00:12:32.559 "params": { 00:12:32.559 "trtype": "pcie", 00:12:32.559 "traddr": "0000:00:06.0", 00:12:32.559 "name": "Nvme0" 00:12:32.559 }, 00:12:32.559 "method": "bdev_nvme_attach_controller" 00:12:32.559 }, 00:12:32.559 { 00:12:32.559 "params": { 00:12:32.559 "trtype": "pcie", 00:12:32.559 "traddr": "0000:00:07.0", 00:12:32.559 "name": "Nvme1" 00:12:32.559 }, 00:12:32.559 "method": "bdev_nvme_attach_controller" 00:12:32.559 }, 00:12:32.559 { 00:12:32.559 "method": "bdev_wait_for_examine" 00:12:32.559 } 00:12:32.559 ] 00:12:32.559 } 00:12:32.559 ] 00:12:32.559 } 00:12:32.559 [2024-11-29 11:56:37.819528] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:32.559 [2024-11-29 11:56:37.819648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:12:32.559 [2024-11-29 11:56:37.960535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.818 [2024-11-29 11:56:38.085600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.207  [2024-11-29T11:56:39.718Z] Copying: 51/64 [MB] (51 MBps) [2024-11-29T11:56:39.978Z] Copying: 64/64 [MB] (average 51 MBps) 00:12:34.467 00:12:34.467 ************************************ 00:12:34.467 END TEST dd_copy_to_out_bdev 00:12:34.467 ************************************ 00:12:34.467 00:12:34.467 real 0m2.188s 00:12:34.467 user 0m1.853s 00:12:34.467 sys 0m0.266s 00:12:34.467 11:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.467 11:56:39 -- common/autotest_common.sh@10 -- # set +x 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:12:34.738 11:56:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.738 11:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.738 11:56:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.738 ************************************ 00:12:34.738 START TEST dd_offset_magic 00:12:34.738 ************************************ 00:12:34.738 11:56:40 -- common/autotest_common.sh@1114 -- # offset_magic 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:12:34.738 11:56:40 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:12:34.738 11:56:40 -- dd/common.sh@31 -- # xtrace_disable 00:12:34.738 11:56:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.738 [2024-11-29 11:56:40.065711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:34.738 [2024-11-29 11:56:40.066069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71074 ] 00:12:34.738 { 00:12:34.738 "subsystems": [ 00:12:34.738 { 00:12:34.738 "subsystem": "bdev", 00:12:34.738 "config": [ 00:12:34.738 { 00:12:34.738 "params": { 00:12:34.738 "trtype": "pcie", 00:12:34.738 "traddr": "0000:00:06.0", 00:12:34.738 "name": "Nvme0" 00:12:34.738 }, 00:12:34.738 "method": "bdev_nvme_attach_controller" 00:12:34.738 }, 00:12:34.738 { 00:12:34.738 "params": { 00:12:34.738 "trtype": "pcie", 00:12:34.738 "traddr": "0000:00:07.0", 00:12:34.738 "name": "Nvme1" 00:12:34.738 }, 00:12:34.738 "method": "bdev_nvme_attach_controller" 00:12:34.738 }, 00:12:34.738 { 00:12:34.738 "method": "bdev_wait_for_examine" 00:12:34.738 } 00:12:34.738 ] 00:12:34.738 } 00:12:34.738 ] 00:12:34.738 } 00:12:34.738 [2024-11-29 11:56:40.202800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.997 [2024-11-29 11:56:40.327045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.254  [2024-11-29T11:56:41.022Z] Copying: 65/65 [MB] (average 915 MBps) 00:12:35.511 00:12:35.511 11:56:40 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:12:35.511 11:56:40 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:12:35.511 11:56:40 -- dd/common.sh@31 -- # xtrace_disable 00:12:35.511 11:56:40 -- common/autotest_common.sh@10 -- # set +x 00:12:35.511 [2024-11-29 11:56:41.016838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:35.512 [2024-11-29 11:56:41.017230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ] 00:12:35.770 { 00:12:35.770 "subsystems": [ 00:12:35.770 { 00:12:35.770 "subsystem": "bdev", 00:12:35.770 "config": [ 00:12:35.770 { 00:12:35.770 "params": { 00:12:35.770 "trtype": "pcie", 00:12:35.770 "traddr": "0000:00:06.0", 00:12:35.770 "name": "Nvme0" 00:12:35.770 }, 00:12:35.770 "method": "bdev_nvme_attach_controller" 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "params": { 00:12:35.770 "trtype": "pcie", 00:12:35.770 "traddr": "0000:00:07.0", 00:12:35.770 "name": "Nvme1" 00:12:35.770 }, 00:12:35.770 "method": "bdev_nvme_attach_controller" 00:12:35.770 }, 00:12:35.770 { 00:12:35.770 "method": "bdev_wait_for_examine" 00:12:35.770 } 00:12:35.770 ] 00:12:35.770 } 00:12:35.770 ] 00:12:35.770 } 00:12:35.770 [2024-11-29 11:56:41.150132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.770 [2024-11-29 11:56:41.275224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.028  [2024-11-29T11:56:42.106Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:12:36.595 00:12:36.595 11:56:41 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:12:36.595 11:56:41 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:12:36.595 11:56:41 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:12:36.596 11:56:41 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:12:36.596 11:56:41 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:12:36.596 11:56:41 -- dd/common.sh@31 -- # xtrace_disable 00:12:36.596 11:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:36.596 [2024-11-29 11:56:41.887868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
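One write/read/verify cycle of dd_offset_magic has now completed: 65 MiB were copied from Nvme0n1 into Nvme1n1 at a 16 MiB offset, the first 1 MiB at that offset was read back into dd.dump1, and the 26-byte prefix was checked against the magic string. A condensed sketch of that cycle, assuming $SPDK_DD and $CONF point at the binary and a bdev config like the one above, and that the data originally written to Nvme0n1 begins with the magic:
off=16   # output/input offset in 1 MiB blocks, as in the first iteration above
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=$off --bs=1048576 --json "$CONF"
"$SPDK_DD" --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=$off --bs=1048576 --json "$CONF"
read -rn26 magic_check < dd.dump1     # 26 bytes = length of "This Is Our Magic, find it"
[[ "$magic_check" == "This Is Our Magic, find it" ]] || echo "magic lost at offset ${off} MiB"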
00:12:36.596 [2024-11-29 11:56:41.887960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71114 ] 00:12:36.596 { 00:12:36.596 "subsystems": [ 00:12:36.596 { 00:12:36.596 "subsystem": "bdev", 00:12:36.596 "config": [ 00:12:36.596 { 00:12:36.596 "params": { 00:12:36.596 "trtype": "pcie", 00:12:36.596 "traddr": "0000:00:06.0", 00:12:36.596 "name": "Nvme0" 00:12:36.596 }, 00:12:36.596 "method": "bdev_nvme_attach_controller" 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "params": { 00:12:36.596 "trtype": "pcie", 00:12:36.596 "traddr": "0000:00:07.0", 00:12:36.596 "name": "Nvme1" 00:12:36.596 }, 00:12:36.596 "method": "bdev_nvme_attach_controller" 00:12:36.596 }, 00:12:36.596 { 00:12:36.596 "method": "bdev_wait_for_examine" 00:12:36.596 } 00:12:36.596 ] 00:12:36.596 } 00:12:36.596 ] 00:12:36.596 } 00:12:36.596 [2024-11-29 11:56:42.019956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.854 [2024-11-29 11:56:42.138805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.114  [2024-11-29T11:56:42.884Z] Copying: 65/65 [MB] (average 970 MBps) 00:12:37.373 00:12:37.373 11:56:42 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:12:37.373 11:56:42 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:12:37.373 11:56:42 -- dd/common.sh@31 -- # xtrace_disable 00:12:37.373 11:56:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.373 [2024-11-29 11:56:42.838858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:37.373 [2024-11-29 11:56:42.839876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71134 ] 00:12:37.373 { 00:12:37.373 "subsystems": [ 00:12:37.373 { 00:12:37.373 "subsystem": "bdev", 00:12:37.373 "config": [ 00:12:37.373 { 00:12:37.373 "params": { 00:12:37.373 "trtype": "pcie", 00:12:37.373 "traddr": "0000:00:06.0", 00:12:37.373 "name": "Nvme0" 00:12:37.373 }, 00:12:37.373 "method": "bdev_nvme_attach_controller" 00:12:37.373 }, 00:12:37.373 { 00:12:37.373 "params": { 00:12:37.373 "trtype": "pcie", 00:12:37.373 "traddr": "0000:00:07.0", 00:12:37.373 "name": "Nvme1" 00:12:37.373 }, 00:12:37.373 "method": "bdev_nvme_attach_controller" 00:12:37.373 }, 00:12:37.373 { 00:12:37.373 "method": "bdev_wait_for_examine" 00:12:37.373 } 00:12:37.373 ] 00:12:37.373 } 00:12:37.373 ] 00:12:37.373 } 00:12:37.632 [2024-11-29 11:56:42.980903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.632 [2024-11-29 11:56:43.102572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.891  [2024-11-29T11:56:43.970Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:12:38.459 00:12:38.459 ************************************ 00:12:38.459 END TEST dd_offset_magic 00:12:38.459 ************************************ 00:12:38.459 11:56:43 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:12:38.459 11:56:43 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:12:38.459 00:12:38.459 real 0m3.678s 00:12:38.459 user 0m2.626s 00:12:38.459 sys 0m0.842s 00:12:38.459 11:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:38.459 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.459 11:56:43 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:12:38.459 11:56:43 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:12:38.459 11:56:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:12:38.459 11:56:43 -- dd/common.sh@11 -- # local nvme_ref= 00:12:38.459 11:56:43 -- dd/common.sh@12 -- # local size=4194330 00:12:38.459 11:56:43 -- dd/common.sh@14 -- # local bs=1048576 00:12:38.459 11:56:43 -- dd/common.sh@15 -- # local count=5 00:12:38.459 11:56:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:12:38.459 11:56:43 -- dd/common.sh@18 -- # gen_conf 00:12:38.459 11:56:43 -- dd/common.sh@31 -- # xtrace_disable 00:12:38.459 11:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.459 [2024-11-29 11:56:43.786194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
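The clear_nvme cleanup above zeroes the region the magic test touched: a 4194330-byte size with a 1 MiB block size comes out as count=5, i.e. the size is rounded up to whole blocks before /dev/zero is written through spdk_dd. A sketch of that rounding (the helper's exact arithmetic in dd/common.sh is not printed in the log, so the ceiling expression here is an assumption):
size=4194330; bs=1048576
count=$(( (size + bs - 1) / bs ))    # assumed ceiling division: 4194330 B -> 5 x 1 MiB blocks
"$SPDK_DD" --if=/dev/zero --bs=$bs --ob=Nvme0n1 --count=$count --json "$CONF"   # matches the log's count=5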
00:12:38.459 [2024-11-29 11:56:43.786316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:12:38.459 { 00:12:38.459 "subsystems": [ 00:12:38.459 { 00:12:38.459 "subsystem": "bdev", 00:12:38.459 "config": [ 00:12:38.459 { 00:12:38.459 "params": { 00:12:38.459 "trtype": "pcie", 00:12:38.459 "traddr": "0000:00:06.0", 00:12:38.459 "name": "Nvme0" 00:12:38.459 }, 00:12:38.459 "method": "bdev_nvme_attach_controller" 00:12:38.459 }, 00:12:38.459 { 00:12:38.459 "params": { 00:12:38.459 "trtype": "pcie", 00:12:38.459 "traddr": "0000:00:07.0", 00:12:38.459 "name": "Nvme1" 00:12:38.459 }, 00:12:38.459 "method": "bdev_nvme_attach_controller" 00:12:38.459 }, 00:12:38.459 { 00:12:38.459 "method": "bdev_wait_for_examine" 00:12:38.459 } 00:12:38.459 ] 00:12:38.459 } 00:12:38.459 ] 00:12:38.459 } 00:12:38.459 [2024-11-29 11:56:43.926492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.718 [2024-11-29 11:56:44.056851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.977  [2024-11-29T11:56:44.747Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:12:39.236 00:12:39.236 11:56:44 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:12:39.236 11:56:44 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:12:39.236 11:56:44 -- dd/common.sh@11 -- # local nvme_ref= 00:12:39.236 11:56:44 -- dd/common.sh@12 -- # local size=4194330 00:12:39.236 11:56:44 -- dd/common.sh@14 -- # local bs=1048576 00:12:39.236 11:56:44 -- dd/common.sh@15 -- # local count=5 00:12:39.236 11:56:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:12:39.236 11:56:44 -- dd/common.sh@18 -- # gen_conf 00:12:39.236 11:56:44 -- dd/common.sh@31 -- # xtrace_disable 00:12:39.236 11:56:44 -- common/autotest_common.sh@10 -- # set +x 00:12:39.236 [2024-11-29 11:56:44.676042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:39.236 [2024-11-29 11:56:44.676341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:12:39.236 { 00:12:39.236 "subsystems": [ 00:12:39.236 { 00:12:39.236 "subsystem": "bdev", 00:12:39.236 "config": [ 00:12:39.236 { 00:12:39.236 "params": { 00:12:39.236 "trtype": "pcie", 00:12:39.236 "traddr": "0000:00:06.0", 00:12:39.236 "name": "Nvme0" 00:12:39.236 }, 00:12:39.236 "method": "bdev_nvme_attach_controller" 00:12:39.237 }, 00:12:39.237 { 00:12:39.237 "params": { 00:12:39.237 "trtype": "pcie", 00:12:39.237 "traddr": "0000:00:07.0", 00:12:39.237 "name": "Nvme1" 00:12:39.237 }, 00:12:39.237 "method": "bdev_nvme_attach_controller" 00:12:39.237 }, 00:12:39.237 { 00:12:39.237 "method": "bdev_wait_for_examine" 00:12:39.237 } 00:12:39.237 ] 00:12:39.237 } 00:12:39.237 ] 00:12:39.237 } 00:12:39.495 [2024-11-29 11:56:44.814526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.495 [2024-11-29 11:56:44.921464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.752  [2024-11-29T11:56:45.521Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:12:40.010 00:12:40.010 11:56:45 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:12:40.010 00:12:40.010 real 0m8.718s 00:12:40.010 user 0m6.205s 00:12:40.010 sys 0m1.994s 00:12:40.010 11:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:40.010 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 ************************************ 00:12:40.010 END TEST spdk_dd_bdev_to_bdev 00:12:40.010 ************************************ 00:12:40.010 11:56:45 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:12:40.010 11:56:45 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:12:40.010 11:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:40.010 11:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.010 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 ************************************ 00:12:40.010 START TEST spdk_dd_uring 00:12:40.010 ************************************ 00:12:40.010 11:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:12:40.270 * Looking for test storage... 
00:12:40.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:40.270 11:56:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:40.270 11:56:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:40.270 11:56:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:40.270 11:56:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:40.270 11:56:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:40.270 11:56:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:40.270 11:56:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:40.270 11:56:45 -- scripts/common.sh@335 -- # IFS=.-: 00:12:40.270 11:56:45 -- scripts/common.sh@335 -- # read -ra ver1 00:12:40.270 11:56:45 -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.270 11:56:45 -- scripts/common.sh@336 -- # read -ra ver2 00:12:40.270 11:56:45 -- scripts/common.sh@337 -- # local 'op=<' 00:12:40.270 11:56:45 -- scripts/common.sh@339 -- # ver1_l=2 00:12:40.270 11:56:45 -- scripts/common.sh@340 -- # ver2_l=1 00:12:40.270 11:56:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:40.270 11:56:45 -- scripts/common.sh@343 -- # case "$op" in 00:12:40.270 11:56:45 -- scripts/common.sh@344 -- # : 1 00:12:40.270 11:56:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:40.270 11:56:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.270 11:56:45 -- scripts/common.sh@364 -- # decimal 1 00:12:40.270 11:56:45 -- scripts/common.sh@352 -- # local d=1 00:12:40.270 11:56:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.270 11:56:45 -- scripts/common.sh@354 -- # echo 1 00:12:40.270 11:56:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:40.270 11:56:45 -- scripts/common.sh@365 -- # decimal 2 00:12:40.270 11:56:45 -- scripts/common.sh@352 -- # local d=2 00:12:40.270 11:56:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.270 11:56:45 -- scripts/common.sh@354 -- # echo 2 00:12:40.270 11:56:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:40.270 11:56:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:40.270 11:56:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:40.270 11:56:45 -- scripts/common.sh@367 -- # return 0 00:12:40.270 11:56:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.270 11:56:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:40.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.270 --rc genhtml_branch_coverage=1 00:12:40.270 --rc genhtml_function_coverage=1 00:12:40.270 --rc genhtml_legend=1 00:12:40.270 --rc geninfo_all_blocks=1 00:12:40.270 --rc geninfo_unexecuted_blocks=1 00:12:40.270 00:12:40.270 ' 00:12:40.270 11:56:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:40.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.270 --rc genhtml_branch_coverage=1 00:12:40.270 --rc genhtml_function_coverage=1 00:12:40.270 --rc genhtml_legend=1 00:12:40.270 --rc geninfo_all_blocks=1 00:12:40.270 --rc geninfo_unexecuted_blocks=1 00:12:40.270 00:12:40.270 ' 00:12:40.270 11:56:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:40.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.270 --rc genhtml_branch_coverage=1 00:12:40.270 --rc genhtml_function_coverage=1 00:12:40.270 --rc genhtml_legend=1 00:12:40.270 --rc geninfo_all_blocks=1 00:12:40.270 --rc geninfo_unexecuted_blocks=1 00:12:40.270 00:12:40.270 ' 00:12:40.270 11:56:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:40.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.270 --rc genhtml_branch_coverage=1 00:12:40.270 --rc genhtml_function_coverage=1 00:12:40.270 --rc genhtml_legend=1 00:12:40.270 --rc geninfo_all_blocks=1 00:12:40.270 --rc geninfo_unexecuted_blocks=1 00:12:40.270 00:12:40.270 ' 00:12:40.270 11:56:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.270 11:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.270 11:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.270 11:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.270 11:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.270 11:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.270 11:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.270 11:56:45 -- paths/export.sh@5 -- # export PATH 00:12:40.270 11:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.270 11:56:45 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:12:40.270 11:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:40.270 11:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.270 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.270 ************************************ 00:12:40.270 START TEST dd_uring_copy 00:12:40.270 ************************************ 00:12:40.270 11:56:45 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:12:40.270 11:56:45 -- dd/uring.sh@15 -- # local zram_dev_id 00:12:40.270 11:56:45 -- dd/uring.sh@16 -- # local magic 00:12:40.270 11:56:45 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:12:40.270 11:56:45 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:40.270 11:56:45 -- dd/uring.sh@19 -- # local verify_magic 00:12:40.270 11:56:45 -- dd/uring.sh@21 -- # init_zram 00:12:40.270 11:56:45 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:12:40.270 11:56:45 -- dd/common.sh@164 -- # return 00:12:40.270 11:56:45 -- dd/uring.sh@22 -- # create_zram_dev 00:12:40.270 11:56:45 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:12:40.270 11:56:45 -- dd/uring.sh@22 -- # zram_dev_id=1 00:12:40.270 11:56:45 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:12:40.270 11:56:45 -- dd/common.sh@181 -- # local id=1 00:12:40.270 11:56:45 -- dd/common.sh@182 -- # local size=512M 00:12:40.270 11:56:45 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:12:40.270 11:56:45 -- dd/common.sh@186 -- # echo 512M 00:12:40.270 11:56:45 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:12:40.271 11:56:45 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:12:40.271 11:56:45 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:12:40.271 11:56:45 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:12:40.271 11:56:45 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:12:40.271 11:56:45 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:12:40.271 11:56:45 -- dd/uring.sh@41 -- # gen_bytes 1024 00:12:40.271 11:56:45 -- dd/common.sh@98 -- # xtrace_disable 00:12:40.271 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:12:40.271 11:56:45 -- dd/uring.sh@41 -- # magic=9m32jbjwtkskvpggekogviwwjywd3fpgpswntktbvgk95ng9aulljj9hmg5bf0stp56wungr63fge0rpqfswb0u3zkyda3h695bw04z24yrlvhbee3id6fzcogl0x2o3yfj9rpu7wfvre9k1k5l59t1gnc1luskbw18rflntnvsg31z6fnn3600y5lgksvirlr904ibilxevb04e9gugbs2s8abkr2hv3ks0umvpabi3qtrpdlf3xcredgpgc5698kbkqhuerc5oe9xd821umq0bdbe5l7x6vtb36fwmbhya1sc5njbpvckk2msnx655a0w6r5pate9wfk8laglq3nphdbf6p3l4kolzplwlkb61fso417fen6dazzy2vaem2fkbninxcz82lq1dp5ve7irivsqsn7zbmvc8mjcewdczog5rcq9twecq6h2qq24mmdfurq6btbh5bw91suw0wdb7snmouumd7ki9xca3p99zhimnhjy6yngfpwuv3yivv4lle5smkz4gd1lv75acvc6nmtirzcmyw5zs8ululm58c0brhs93cfiacvzbgpbx6c8q6mleklicn4rw4qbb60115lq6zub01sxzeqic8lyzihcnj2ig8zdjqy4ng4ijehcqalcco915v15a4qgyj6bc75q13p596t05i23cg43qisn6mxpnnf4jew9stk0bq2egr9pyddzy8vocke1lzvt37w8nh54slz5oip5l1wruyyoklwodhcnlek5tquqnkhbwy8zfrgldeqolae6ijfkmizlrgba0rc0dyba0pivkhidpyc2xeygthfumqybl6b2p62hl1qlxwblfy76y573xolmlko8b59yz3rgm92d6qicuwqosrucnrxxais6uxwab6whjh83kcd1ls4fb8khiw4hvy6bp9cs9pabywd6ic3yi1cinlz1r20ezzjpo6r2nkkso67qnd2mwzwh7b7sh3k35kp9pgqxarq0xa9e5etoel6vt4c5elk6olwru 00:12:40.271 11:56:45 -- dd/uring.sh@42 -- # echo 
9m32jbjwtkskvpggekogviwwjywd3fpgpswntktbvgk95ng9aulljj9hmg5bf0stp56wungr63fge0rpqfswb0u3zkyda3h695bw04z24yrlvhbee3id6fzcogl0x2o3yfj9rpu7wfvre9k1k5l59t1gnc1luskbw18rflntnvsg31z6fnn3600y5lgksvirlr904ibilxevb04e9gugbs2s8abkr2hv3ks0umvpabi3qtrpdlf3xcredgpgc5698kbkqhuerc5oe9xd821umq0bdbe5l7x6vtb36fwmbhya1sc5njbpvckk2msnx655a0w6r5pate9wfk8laglq3nphdbf6p3l4kolzplwlkb61fso417fen6dazzy2vaem2fkbninxcz82lq1dp5ve7irivsqsn7zbmvc8mjcewdczog5rcq9twecq6h2qq24mmdfurq6btbh5bw91suw0wdb7snmouumd7ki9xca3p99zhimnhjy6yngfpwuv3yivv4lle5smkz4gd1lv75acvc6nmtirzcmyw5zs8ululm58c0brhs93cfiacvzbgpbx6c8q6mleklicn4rw4qbb60115lq6zub01sxzeqic8lyzihcnj2ig8zdjqy4ng4ijehcqalcco915v15a4qgyj6bc75q13p596t05i23cg43qisn6mxpnnf4jew9stk0bq2egr9pyddzy8vocke1lzvt37w8nh54slz5oip5l1wruyyoklwodhcnlek5tquqnkhbwy8zfrgldeqolae6ijfkmizlrgba0rc0dyba0pivkhidpyc2xeygthfumqybl6b2p62hl1qlxwblfy76y573xolmlko8b59yz3rgm92d6qicuwqosrucnrxxais6uxwab6whjh83kcd1ls4fb8khiw4hvy6bp9cs9pabywd6ic3yi1cinlz1r20ezzjpo6r2nkkso67qnd2mwzwh7b7sh3k35kp9pgqxarq0xa9e5etoel6vt4c5elk6olwru 00:12:40.271 11:56:45 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:12:40.530 [2024-11-29 11:56:45.798558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:40.530 [2024-11-29 11:56:45.798938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71259 ] 00:12:40.530 [2024-11-29 11:56:45.950797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.530 [2024-11-29 11:56:46.022490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.466  [2024-11-29T11:56:47.236Z] Copying: 511/511 [MB] (average 1376 MBps) 00:12:41.725 00:12:41.725 11:56:47 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:12:41.725 11:56:47 -- dd/uring.sh@54 -- # gen_conf 00:12:41.725 11:56:47 -- dd/common.sh@31 -- # xtrace_disable 00:12:41.725 11:56:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.725 [2024-11-29 11:56:47.121667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
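Behind uring0 sits a zram device: the test checks that /sys/class/zram-control exists, asks hot_add for a fresh device id (1 here), and sizes it to 512M before binding it as a uring bdev. A sketch of that bring-up, assuming the standard zram sysfs layout; the disksize path itself is not printed in the log:
[[ -e /sys/class/zram-control ]] || { echo "zram driver not loaded"; exit 1; }   # init_zram
dev_id=$(cat /sys/class/zram-control/hot_add)      # create_zram_dev: allocates /dev/zram$dev_id
echo 512M > "/sys/block/zram${dev_id}/disksize"    # set_zram_dev 1 512M (assumed sysfs knob)
echo "uring0 will be created on /dev/zram${dev_id}"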
00:12:41.725 [2024-11-29 11:56:47.121803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71279 ] 00:12:41.725 { 00:12:41.725 "subsystems": [ 00:12:41.725 { 00:12:41.725 "subsystem": "bdev", 00:12:41.725 "config": [ 00:12:41.725 { 00:12:41.725 "params": { 00:12:41.725 "block_size": 512, 00:12:41.725 "num_blocks": 1048576, 00:12:41.725 "name": "malloc0" 00:12:41.725 }, 00:12:41.725 "method": "bdev_malloc_create" 00:12:41.725 }, 00:12:41.725 { 00:12:41.725 "params": { 00:12:41.725 "filename": "/dev/zram1", 00:12:41.725 "name": "uring0" 00:12:41.725 }, 00:12:41.725 "method": "bdev_uring_create" 00:12:41.725 }, 00:12:41.725 { 00:12:41.725 "method": "bdev_wait_for_examine" 00:12:41.725 } 00:12:41.725 ] 00:12:41.725 } 00:12:41.725 ] 00:12:41.725 } 00:12:41.984 [2024-11-29 11:56:47.260591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.984 [2024-11-29 11:56:47.357587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.362  [2024-11-29T11:56:49.808Z] Copying: 167/512 [MB] (167 MBps) [2024-11-29T11:56:50.745Z] Copying: 344/512 [MB] (177 MBps) [2024-11-29T11:56:51.313Z] Copying: 512/512 [MB] (average 175 MBps) 00:12:45.802 00:12:45.802 11:56:51 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:12:45.802 11:56:51 -- dd/uring.sh@60 -- # gen_conf 00:12:45.802 11:56:51 -- dd/common.sh@31 -- # xtrace_disable 00:12:45.802 11:56:51 -- common/autotest_common.sh@10 -- # set +x 00:12:45.802 [2024-11-29 11:56:51.082075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
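The sizes above line up exactly: the 1024-character magic plus the newline echo appends (1025 bytes) plus the 536869887 zero bytes appended by spdk_dd give a 512 MiB magic.dump0, which matches both the 512M zram device and the malloc0 bdev (1048576 blocks of 512 bytes), hence the clean 512/512 [MB] copy. The arithmetic, assuming the echo above is redirected into magic.dump0 with its trailing newline:
echo $(( 1024 + 1 + 536869887 ))   # 536870912: magic + newline + appended zeroes
echo $(( 1048576 * 512 ))          # 536870912: malloc0 size, same as the 512M zram device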
00:12:45.802 [2024-11-29 11:56:51.082196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ] 00:12:45.802 { 00:12:45.802 "subsystems": [ 00:12:45.802 { 00:12:45.802 "subsystem": "bdev", 00:12:45.802 "config": [ 00:12:45.802 { 00:12:45.802 "params": { 00:12:45.802 "block_size": 512, 00:12:45.802 "num_blocks": 1048576, 00:12:45.802 "name": "malloc0" 00:12:45.802 }, 00:12:45.802 "method": "bdev_malloc_create" 00:12:45.802 }, 00:12:45.802 { 00:12:45.802 "params": { 00:12:45.802 "filename": "/dev/zram1", 00:12:45.802 "name": "uring0" 00:12:45.802 }, 00:12:45.802 "method": "bdev_uring_create" 00:12:45.802 }, 00:12:45.802 { 00:12:45.802 "method": "bdev_wait_for_examine" 00:12:45.802 } 00:12:45.802 ] 00:12:45.802 } 00:12:45.802 ] 00:12:45.802 } 00:12:45.802 [2024-11-29 11:56:51.221198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.802 [2024-11-29 11:56:51.303054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.209  [2024-11-29T11:56:53.655Z] Copying: 138/512 [MB] (138 MBps) [2024-11-29T11:56:54.591Z] Copying: 266/512 [MB] (128 MBps) [2024-11-29T11:56:55.528Z] Copying: 402/512 [MB] (136 MBps) [2024-11-29T11:56:56.096Z] Copying: 512/512 [MB] (average 132 MBps) 00:12:50.585 00:12:50.585 11:56:55 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:12:50.585 11:56:55 -- dd/uring.sh@66 -- # [[ 9m32jbjwtkskvpggekogviwwjywd3fpgpswntktbvgk95ng9aulljj9hmg5bf0stp56wungr63fge0rpqfswb0u3zkyda3h695bw04z24yrlvhbee3id6fzcogl0x2o3yfj9rpu7wfvre9k1k5l59t1gnc1luskbw18rflntnvsg31z6fnn3600y5lgksvirlr904ibilxevb04e9gugbs2s8abkr2hv3ks0umvpabi3qtrpdlf3xcredgpgc5698kbkqhuerc5oe9xd821umq0bdbe5l7x6vtb36fwmbhya1sc5njbpvckk2msnx655a0w6r5pate9wfk8laglq3nphdbf6p3l4kolzplwlkb61fso417fen6dazzy2vaem2fkbninxcz82lq1dp5ve7irivsqsn7zbmvc8mjcewdczog5rcq9twecq6h2qq24mmdfurq6btbh5bw91suw0wdb7snmouumd7ki9xca3p99zhimnhjy6yngfpwuv3yivv4lle5smkz4gd1lv75acvc6nmtirzcmyw5zs8ululm58c0brhs93cfiacvzbgpbx6c8q6mleklicn4rw4qbb60115lq6zub01sxzeqic8lyzihcnj2ig8zdjqy4ng4ijehcqalcco915v15a4qgyj6bc75q13p596t05i23cg43qisn6mxpnnf4jew9stk0bq2egr9pyddzy8vocke1lzvt37w8nh54slz5oip5l1wruyyoklwodhcnlek5tquqnkhbwy8zfrgldeqolae6ijfkmizlrgba0rc0dyba0pivkhidpyc2xeygthfumqybl6b2p62hl1qlxwblfy76y573xolmlko8b59yz3rgm92d6qicuwqosrucnrxxais6uxwab6whjh83kcd1ls4fb8khiw4hvy6bp9cs9pabywd6ic3yi1cinlz1r20ezzjpo6r2nkkso67qnd2mwzwh7b7sh3k35kp9pgqxarq0xa9e5etoel6vt4c5elk6olwru == 
\9\m\3\2\j\b\j\w\t\k\s\k\v\p\g\g\e\k\o\g\v\i\w\w\j\y\w\d\3\f\p\g\p\s\w\n\t\k\t\b\v\g\k\9\5\n\g\9\a\u\l\l\j\j\9\h\m\g\5\b\f\0\s\t\p\5\6\w\u\n\g\r\6\3\f\g\e\0\r\p\q\f\s\w\b\0\u\3\z\k\y\d\a\3\h\6\9\5\b\w\0\4\z\2\4\y\r\l\v\h\b\e\e\3\i\d\6\f\z\c\o\g\l\0\x\2\o\3\y\f\j\9\r\p\u\7\w\f\v\r\e\9\k\1\k\5\l\5\9\t\1\g\n\c\1\l\u\s\k\b\w\1\8\r\f\l\n\t\n\v\s\g\3\1\z\6\f\n\n\3\6\0\0\y\5\l\g\k\s\v\i\r\l\r\9\0\4\i\b\i\l\x\e\v\b\0\4\e\9\g\u\g\b\s\2\s\8\a\b\k\r\2\h\v\3\k\s\0\u\m\v\p\a\b\i\3\q\t\r\p\d\l\f\3\x\c\r\e\d\g\p\g\c\5\6\9\8\k\b\k\q\h\u\e\r\c\5\o\e\9\x\d\8\2\1\u\m\q\0\b\d\b\e\5\l\7\x\6\v\t\b\3\6\f\w\m\b\h\y\a\1\s\c\5\n\j\b\p\v\c\k\k\2\m\s\n\x\6\5\5\a\0\w\6\r\5\p\a\t\e\9\w\f\k\8\l\a\g\l\q\3\n\p\h\d\b\f\6\p\3\l\4\k\o\l\z\p\l\w\l\k\b\6\1\f\s\o\4\1\7\f\e\n\6\d\a\z\z\y\2\v\a\e\m\2\f\k\b\n\i\n\x\c\z\8\2\l\q\1\d\p\5\v\e\7\i\r\i\v\s\q\s\n\7\z\b\m\v\c\8\m\j\c\e\w\d\c\z\o\g\5\r\c\q\9\t\w\e\c\q\6\h\2\q\q\2\4\m\m\d\f\u\r\q\6\b\t\b\h\5\b\w\9\1\s\u\w\0\w\d\b\7\s\n\m\o\u\u\m\d\7\k\i\9\x\c\a\3\p\9\9\z\h\i\m\n\h\j\y\6\y\n\g\f\p\w\u\v\3\y\i\v\v\4\l\l\e\5\s\m\k\z\4\g\d\1\l\v\7\5\a\c\v\c\6\n\m\t\i\r\z\c\m\y\w\5\z\s\8\u\l\u\l\m\5\8\c\0\b\r\h\s\9\3\c\f\i\a\c\v\z\b\g\p\b\x\6\c\8\q\6\m\l\e\k\l\i\c\n\4\r\w\4\q\b\b\6\0\1\1\5\l\q\6\z\u\b\0\1\s\x\z\e\q\i\c\8\l\y\z\i\h\c\n\j\2\i\g\8\z\d\j\q\y\4\n\g\4\i\j\e\h\c\q\a\l\c\c\o\9\1\5\v\1\5\a\4\q\g\y\j\6\b\c\7\5\q\1\3\p\5\9\6\t\0\5\i\2\3\c\g\4\3\q\i\s\n\6\m\x\p\n\n\f\4\j\e\w\9\s\t\k\0\b\q\2\e\g\r\9\p\y\d\d\z\y\8\v\o\c\k\e\1\l\z\v\t\3\7\w\8\n\h\5\4\s\l\z\5\o\i\p\5\l\1\w\r\u\y\y\o\k\l\w\o\d\h\c\n\l\e\k\5\t\q\u\q\n\k\h\b\w\y\8\z\f\r\g\l\d\e\q\o\l\a\e\6\i\j\f\k\m\i\z\l\r\g\b\a\0\r\c\0\d\y\b\a\0\p\i\v\k\h\i\d\p\y\c\2\x\e\y\g\t\h\f\u\m\q\y\b\l\6\b\2\p\6\2\h\l\1\q\l\x\w\b\l\f\y\7\6\y\5\7\3\x\o\l\m\l\k\o\8\b\5\9\y\z\3\r\g\m\9\2\d\6\q\i\c\u\w\q\o\s\r\u\c\n\r\x\x\a\i\s\6\u\x\w\a\b\6\w\h\j\h\8\3\k\c\d\1\l\s\4\f\b\8\k\h\i\w\4\h\v\y\6\b\p\9\c\s\9\p\a\b\y\w\d\6\i\c\3\y\i\1\c\i\n\l\z\1\r\2\0\e\z\z\j\p\o\6\r\2\n\k\k\s\o\6\7\q\n\d\2\m\w\z\w\h\7\b\7\s\h\3\k\3\5\k\p\9\p\g\q\x\a\r\q\0\x\a\9\e\5\e\t\o\e\l\6\v\t\4\c\5\e\l\k\6\o\l\w\r\u ]] 00:12:50.585 11:56:55 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:12:50.585 11:56:55 -- dd/uring.sh@69 -- # [[ 9m32jbjwtkskvpggekogviwwjywd3fpgpswntktbvgk95ng9aulljj9hmg5bf0stp56wungr63fge0rpqfswb0u3zkyda3h695bw04z24yrlvhbee3id6fzcogl0x2o3yfj9rpu7wfvre9k1k5l59t1gnc1luskbw18rflntnvsg31z6fnn3600y5lgksvirlr904ibilxevb04e9gugbs2s8abkr2hv3ks0umvpabi3qtrpdlf3xcredgpgc5698kbkqhuerc5oe9xd821umq0bdbe5l7x6vtb36fwmbhya1sc5njbpvckk2msnx655a0w6r5pate9wfk8laglq3nphdbf6p3l4kolzplwlkb61fso417fen6dazzy2vaem2fkbninxcz82lq1dp5ve7irivsqsn7zbmvc8mjcewdczog5rcq9twecq6h2qq24mmdfurq6btbh5bw91suw0wdb7snmouumd7ki9xca3p99zhimnhjy6yngfpwuv3yivv4lle5smkz4gd1lv75acvc6nmtirzcmyw5zs8ululm58c0brhs93cfiacvzbgpbx6c8q6mleklicn4rw4qbb60115lq6zub01sxzeqic8lyzihcnj2ig8zdjqy4ng4ijehcqalcco915v15a4qgyj6bc75q13p596t05i23cg43qisn6mxpnnf4jew9stk0bq2egr9pyddzy8vocke1lzvt37w8nh54slz5oip5l1wruyyoklwodhcnlek5tquqnkhbwy8zfrgldeqolae6ijfkmizlrgba0rc0dyba0pivkhidpyc2xeygthfumqybl6b2p62hl1qlxwblfy76y573xolmlko8b59yz3rgm92d6qicuwqosrucnrxxais6uxwab6whjh83kcd1ls4fb8khiw4hvy6bp9cs9pabywd6ic3yi1cinlz1r20ezzjpo6r2nkkso67qnd2mwzwh7b7sh3k35kp9pgqxarq0xa9e5etoel6vt4c5elk6olwru == 
\9\m\3\2\j\b\j\w\t\k\s\k\v\p\g\g\e\k\o\g\v\i\w\w\j\y\w\d\3\f\p\g\p\s\w\n\t\k\t\b\v\g\k\9\5\n\g\9\a\u\l\l\j\j\9\h\m\g\5\b\f\0\s\t\p\5\6\w\u\n\g\r\6\3\f\g\e\0\r\p\q\f\s\w\b\0\u\3\z\k\y\d\a\3\h\6\9\5\b\w\0\4\z\2\4\y\r\l\v\h\b\e\e\3\i\d\6\f\z\c\o\g\l\0\x\2\o\3\y\f\j\9\r\p\u\7\w\f\v\r\e\9\k\1\k\5\l\5\9\t\1\g\n\c\1\l\u\s\k\b\w\1\8\r\f\l\n\t\n\v\s\g\3\1\z\6\f\n\n\3\6\0\0\y\5\l\g\k\s\v\i\r\l\r\9\0\4\i\b\i\l\x\e\v\b\0\4\e\9\g\u\g\b\s\2\s\8\a\b\k\r\2\h\v\3\k\s\0\u\m\v\p\a\b\i\3\q\t\r\p\d\l\f\3\x\c\r\e\d\g\p\g\c\5\6\9\8\k\b\k\q\h\u\e\r\c\5\o\e\9\x\d\8\2\1\u\m\q\0\b\d\b\e\5\l\7\x\6\v\t\b\3\6\f\w\m\b\h\y\a\1\s\c\5\n\j\b\p\v\c\k\k\2\m\s\n\x\6\5\5\a\0\w\6\r\5\p\a\t\e\9\w\f\k\8\l\a\g\l\q\3\n\p\h\d\b\f\6\p\3\l\4\k\o\l\z\p\l\w\l\k\b\6\1\f\s\o\4\1\7\f\e\n\6\d\a\z\z\y\2\v\a\e\m\2\f\k\b\n\i\n\x\c\z\8\2\l\q\1\d\p\5\v\e\7\i\r\i\v\s\q\s\n\7\z\b\m\v\c\8\m\j\c\e\w\d\c\z\o\g\5\r\c\q\9\t\w\e\c\q\6\h\2\q\q\2\4\m\m\d\f\u\r\q\6\b\t\b\h\5\b\w\9\1\s\u\w\0\w\d\b\7\s\n\m\o\u\u\m\d\7\k\i\9\x\c\a\3\p\9\9\z\h\i\m\n\h\j\y\6\y\n\g\f\p\w\u\v\3\y\i\v\v\4\l\l\e\5\s\m\k\z\4\g\d\1\l\v\7\5\a\c\v\c\6\n\m\t\i\r\z\c\m\y\w\5\z\s\8\u\l\u\l\m\5\8\c\0\b\r\h\s\9\3\c\f\i\a\c\v\z\b\g\p\b\x\6\c\8\q\6\m\l\e\k\l\i\c\n\4\r\w\4\q\b\b\6\0\1\1\5\l\q\6\z\u\b\0\1\s\x\z\e\q\i\c\8\l\y\z\i\h\c\n\j\2\i\g\8\z\d\j\q\y\4\n\g\4\i\j\e\h\c\q\a\l\c\c\o\9\1\5\v\1\5\a\4\q\g\y\j\6\b\c\7\5\q\1\3\p\5\9\6\t\0\5\i\2\3\c\g\4\3\q\i\s\n\6\m\x\p\n\n\f\4\j\e\w\9\s\t\k\0\b\q\2\e\g\r\9\p\y\d\d\z\y\8\v\o\c\k\e\1\l\z\v\t\3\7\w\8\n\h\5\4\s\l\z\5\o\i\p\5\l\1\w\r\u\y\y\o\k\l\w\o\d\h\c\n\l\e\k\5\t\q\u\q\n\k\h\b\w\y\8\z\f\r\g\l\d\e\q\o\l\a\e\6\i\j\f\k\m\i\z\l\r\g\b\a\0\r\c\0\d\y\b\a\0\p\i\v\k\h\i\d\p\y\c\2\x\e\y\g\t\h\f\u\m\q\y\b\l\6\b\2\p\6\2\h\l\1\q\l\x\w\b\l\f\y\7\6\y\5\7\3\x\o\l\m\l\k\o\8\b\5\9\y\z\3\r\g\m\9\2\d\6\q\i\c\u\w\q\o\s\r\u\c\n\r\x\x\a\i\s\6\u\x\w\a\b\6\w\h\j\h\8\3\k\c\d\1\l\s\4\f\b\8\k\h\i\w\4\h\v\y\6\b\p\9\c\s\9\p\a\b\y\w\d\6\i\c\3\y\i\1\c\i\n\l\z\1\r\2\0\e\z\z\j\p\o\6\r\2\n\k\k\s\o\6\7\q\n\d\2\m\w\z\w\h\7\b\7\s\h\3\k\3\5\k\p\9\p\g\q\x\a\r\q\0\x\a\9\e\5\e\t\o\e\l\6\v\t\4\c\5\e\l\k\6\o\l\w\r\u ]] 00:12:50.585 11:56:55 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:50.844 11:56:56 -- dd/uring.sh@75 -- # gen_conf 00:12:50.844 11:56:56 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:12:50.844 11:56:56 -- dd/common.sh@31 -- # xtrace_disable 00:12:50.844 11:56:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.844 [2024-11-29 11:56:56.282498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
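After the round trip (magic.dump0 -> uring0 -> magic.dump1), the first 1 KiB of each dump is compared against the generated magic and the two files are diffed, exactly as the checks above show. The same verification in compact form, assuming $magic still holds the string produced by gen_bytes 1024:
read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
[[ "$verify_magic" == "$magic" ]] || { echo "magic corrupted in copy"; exit 1; }
diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 \
        /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1    # payloads must be byte-identical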
00:12:50.844 [2024-11-29 11:56:56.282664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71436 ] 00:12:50.844 { 00:12:50.844 "subsystems": [ 00:12:50.844 { 00:12:50.844 "subsystem": "bdev", 00:12:50.844 "config": [ 00:12:50.844 { 00:12:50.844 "params": { 00:12:50.844 "block_size": 512, 00:12:50.844 "num_blocks": 1048576, 00:12:50.844 "name": "malloc0" 00:12:50.844 }, 00:12:50.844 "method": "bdev_malloc_create" 00:12:50.844 }, 00:12:50.844 { 00:12:50.844 "params": { 00:12:50.844 "filename": "/dev/zram1", 00:12:50.844 "name": "uring0" 00:12:50.844 }, 00:12:50.844 "method": "bdev_uring_create" 00:12:50.844 }, 00:12:50.844 { 00:12:50.844 "method": "bdev_wait_for_examine" 00:12:50.844 } 00:12:50.844 ] 00:12:50.844 } 00:12:50.844 ] 00:12:50.844 } 00:12:51.103 [2024-11-29 11:56:56.422458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.103 [2024-11-29 11:56:56.520682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.478  [2024-11-29T11:56:58.925Z] Copying: 147/512 [MB] (147 MBps) [2024-11-29T11:56:59.862Z] Copying: 295/512 [MB] (147 MBps) [2024-11-29T11:57:00.429Z] Copying: 442/512 [MB] (147 MBps) [2024-11-29T11:57:00.688Z] Copying: 512/512 [MB] (average 147 MBps) 00:12:55.177 00:12:55.177 11:57:00 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:55.177 11:57:00 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:55.177 11:57:00 -- dd/uring.sh@87 -- # : 00:12:55.177 11:57:00 -- dd/uring.sh@87 -- # : 00:12:55.177 11:57:00 -- dd/uring.sh@87 -- # gen_conf 00:12:55.177 11:57:00 -- dd/common.sh@31 -- # xtrace_disable 00:12:55.177 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:12:55.177 11:57:00 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:55.437 [2024-11-29 11:57:00.700722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:55.437 [2024-11-29 11:57:00.701104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71493 ] 00:12:55.437 { 00:12:55.437 "subsystems": [ 00:12:55.437 { 00:12:55.437 "subsystem": "bdev", 00:12:55.437 "config": [ 00:12:55.437 { 00:12:55.437 "params": { 00:12:55.437 "block_size": 512, 00:12:55.437 "num_blocks": 1048576, 00:12:55.437 "name": "malloc0" 00:12:55.437 }, 00:12:55.437 "method": "bdev_malloc_create" 00:12:55.437 }, 00:12:55.437 { 00:12:55.437 "params": { 00:12:55.437 "filename": "/dev/zram1", 00:12:55.437 "name": "uring0" 00:12:55.437 }, 00:12:55.437 "method": "bdev_uring_create" 00:12:55.437 }, 00:12:55.437 { 00:12:55.437 "params": { 00:12:55.437 "name": "uring0" 00:12:55.437 }, 00:12:55.437 "method": "bdev_uring_delete" 00:12:55.437 }, 00:12:55.437 { 00:12:55.437 "method": "bdev_wait_for_examine" 00:12:55.437 } 00:12:55.437 ] 00:12:55.437 } 00:12:55.437 ] 00:12:55.437 } 00:12:55.437 [2024-11-29 11:57:00.840344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.695 [2024-11-29 11:57:00.961576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.954  [2024-11-29T11:57:02.033Z] Copying: 0/0 [B] (average 0 Bps) 00:12:56.522 00:12:56.522 11:57:01 -- dd/uring.sh@94 -- # : 00:12:56.522 11:57:01 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:56.522 11:57:01 -- dd/uring.sh@94 -- # gen_conf 00:12:56.522 11:57:01 -- common/autotest_common.sh@650 -- # local es=0 00:12:56.522 11:57:01 -- dd/common.sh@31 -- # xtrace_disable 00:12:56.522 11:57:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:56.522 11:57:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.522 11:57:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:56.522 11:57:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.522 11:57:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:56.522 11:57:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.522 11:57:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:56.522 11:57:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.522 11:57:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:56.522 11:57:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:56.522 11:57:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:56.522 [2024-11-29 11:57:01.941813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:56.522 [2024-11-29 11:57:01.941933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71527 ] 00:12:56.522 { 00:12:56.522 "subsystems": [ 00:12:56.522 { 00:12:56.522 "subsystem": "bdev", 00:12:56.522 "config": [ 00:12:56.522 { 00:12:56.522 "params": { 00:12:56.522 "block_size": 512, 00:12:56.522 "num_blocks": 1048576, 00:12:56.522 "name": "malloc0" 00:12:56.522 }, 00:12:56.522 "method": "bdev_malloc_create" 00:12:56.522 }, 00:12:56.522 { 00:12:56.522 "params": { 00:12:56.522 "filename": "/dev/zram1", 00:12:56.522 "name": "uring0" 00:12:56.522 }, 00:12:56.522 "method": "bdev_uring_create" 00:12:56.522 }, 00:12:56.522 { 00:12:56.522 "params": { 00:12:56.522 "name": "uring0" 00:12:56.522 }, 00:12:56.522 "method": "bdev_uring_delete" 00:12:56.522 }, 00:12:56.522 { 00:12:56.522 "method": "bdev_wait_for_examine" 00:12:56.522 } 00:12:56.522 ] 00:12:56.522 } 00:12:56.522 ] 00:12:56.522 } 00:12:56.821 [2024-11-29 11:57:02.077735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.821 [2024-11-29 11:57:02.202584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.079 [2024-11-29 11:57:02.536545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:57.079 [2024-11-29 11:57:02.536837] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:57.079 [2024-11-29 11:57:02.536858] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:12:57.079 [2024-11-29 11:57:02.536870] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:57.644 [2024-11-29 11:57:02.973731] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:12:57.644 11:57:03 -- common/autotest_common.sh@653 -- # es=237 00:12:57.644 11:57:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.644 11:57:03 -- common/autotest_common.sh@662 -- # es=109 00:12:57.644 11:57:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:57.644 11:57:03 -- common/autotest_common.sh@670 -- # es=1 00:12:57.644 11:57:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.644 11:57:03 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:57.644 11:57:03 -- dd/common.sh@172 -- # local id=1 00:12:57.644 11:57:03 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:12:57.644 11:57:03 -- dd/common.sh@176 -- # echo 1 00:12:57.644 11:57:03 -- dd/common.sh@177 -- # echo 1 00:12:57.644 11:57:03 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:57.900 00:12:57.901 real 0m17.626s 00:12:57.901 user 0m10.295s 00:12:57.901 sys 0m6.656s 00:12:57.901 11:57:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.901 ************************************ 00:12:57.901 END TEST dd_uring_copy 00:12:57.901 ************************************ 00:12:57.901 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.901 ************************************ 00:12:57.901 END TEST spdk_dd_uring 00:12:57.901 ************************************ 00:12:57.901 00:12:57.901 real 0m17.860s 00:12:57.901 user 0m10.423s 00:12:57.901 sys 0m6.766s 00:12:57.901 11:57:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.901 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.901 11:57:03 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:57.901 11:57:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:57.901 11:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.901 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 ************************************ 00:12:58.158 START TEST spdk_dd_sparse 00:12:58.158 ************************************ 00:12:58.158 11:57:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:58.158 * Looking for test storage... 00:12:58.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:58.158 11:57:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.158 11:57:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.158 11:57:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:58.158 11:57:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:58.158 11:57:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:58.158 11:57:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:58.158 11:57:03 -- scripts/common.sh@335 -- # IFS=.-: 00:12:58.158 11:57:03 -- scripts/common.sh@335 -- # read -ra ver1 00:12:58.158 11:57:03 -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.158 11:57:03 -- scripts/common.sh@336 -- # read -ra ver2 00:12:58.158 11:57:03 -- scripts/common.sh@337 -- # local 'op=<' 00:12:58.158 11:57:03 -- scripts/common.sh@339 -- # ver1_l=2 00:12:58.158 11:57:03 -- scripts/common.sh@340 -- # ver2_l=1 00:12:58.158 11:57:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:58.158 11:57:03 -- scripts/common.sh@343 -- # case "$op" in 00:12:58.158 11:57:03 -- scripts/common.sh@344 -- # : 1 00:12:58.158 11:57:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:58.158 11:57:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.158 11:57:03 -- scripts/common.sh@364 -- # decimal 1 00:12:58.158 11:57:03 -- scripts/common.sh@352 -- # local d=1 00:12:58.158 11:57:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.158 11:57:03 -- scripts/common.sh@354 -- # echo 1 00:12:58.158 11:57:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:58.158 11:57:03 -- scripts/common.sh@365 -- # decimal 2 00:12:58.158 11:57:03 -- scripts/common.sh@352 -- # local d=2 00:12:58.158 11:57:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.158 11:57:03 -- scripts/common.sh@354 -- # echo 2 00:12:58.158 11:57:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:58.158 11:57:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:58.158 11:57:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:58.158 11:57:03 -- scripts/common.sh@367 -- # return 0 00:12:58.158 11:57:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.158 --rc genhtml_branch_coverage=1 00:12:58.158 --rc genhtml_function_coverage=1 00:12:58.158 --rc genhtml_legend=1 00:12:58.158 --rc geninfo_all_blocks=1 00:12:58.158 --rc geninfo_unexecuted_blocks=1 00:12:58.158 00:12:58.158 ' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.158 --rc genhtml_branch_coverage=1 00:12:58.158 --rc genhtml_function_coverage=1 00:12:58.158 --rc genhtml_legend=1 00:12:58.158 --rc geninfo_all_blocks=1 00:12:58.158 --rc geninfo_unexecuted_blocks=1 00:12:58.158 00:12:58.158 ' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.158 --rc genhtml_branch_coverage=1 00:12:58.158 --rc genhtml_function_coverage=1 00:12:58.158 --rc genhtml_legend=1 00:12:58.158 --rc geninfo_all_blocks=1 00:12:58.158 --rc geninfo_unexecuted_blocks=1 00:12:58.158 00:12:58.158 ' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:58.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.158 --rc genhtml_branch_coverage=1 00:12:58.158 --rc genhtml_function_coverage=1 00:12:58.158 --rc genhtml_legend=1 00:12:58.158 --rc geninfo_all_blocks=1 00:12:58.158 --rc geninfo_unexecuted_blocks=1 00:12:58.158 00:12:58.158 ' 00:12:58.158 11:57:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.158 11:57:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.158 11:57:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.158 11:57:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.158 11:57:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.158 11:57:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.158 11:57:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.158 11:57:03 -- paths/export.sh@5 -- # export PATH 00:12:58.158 11:57:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.158 11:57:03 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:58.158 11:57:03 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:58.158 11:57:03 -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:58.158 11:57:03 -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:58.158 11:57:03 -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:58.158 11:57:03 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:58.158 11:57:03 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:58.158 11:57:03 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:58.158 11:57:03 -- dd/sparse.sh@118 -- # prepare 00:12:58.158 11:57:03 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:58.158 11:57:03 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:58.158 1+0 records in 00:12:58.158 1+0 records out 00:12:58.158 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00646791 s, 648 MB/s 00:12:58.158 11:57:03 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:58.158 1+0 records in 00:12:58.158 1+0 records out 00:12:58.158 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00765462 s, 548 MB/s 00:12:58.158 11:57:03 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:58.158 1+0 records in 00:12:58.158 1+0 records out 00:12:58.158 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00893046 s, 470 MB/s 00:12:58.158 11:57:03 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:58.158 11:57:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:58.158 11:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.158 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 ************************************ 00:12:58.158 START TEST dd_sparse_file_to_file 00:12:58.158 
************************************ 00:12:58.158 11:57:03 -- common/autotest_common.sh@1114 -- # file_to_file 00:12:58.158 11:57:03 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:58.158 11:57:03 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:58.158 11:57:03 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:58.158 11:57:03 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:58.158 11:57:03 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:12:58.158 11:57:03 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:58.158 11:57:03 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:58.159 11:57:03 -- dd/sparse.sh@41 -- # gen_conf 00:12:58.159 11:57:03 -- dd/common.sh@31 -- # xtrace_disable 00:12:58.159 11:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:58.416 [2024-11-29 11:57:03.712088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:58.416 [2024-11-29 11:57:03.712359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71626 ] 00:12:58.416 { 00:12:58.416 "subsystems": [ 00:12:58.416 { 00:12:58.416 "subsystem": "bdev", 00:12:58.416 "config": [ 00:12:58.416 { 00:12:58.416 "params": { 00:12:58.416 "block_size": 4096, 00:12:58.416 "filename": "dd_sparse_aio_disk", 00:12:58.416 "name": "dd_aio" 00:12:58.416 }, 00:12:58.416 "method": "bdev_aio_create" 00:12:58.416 }, 00:12:58.416 { 00:12:58.416 "params": { 00:12:58.416 "lvs_name": "dd_lvstore", 00:12:58.416 "bdev_name": "dd_aio" 00:12:58.416 }, 00:12:58.416 "method": "bdev_lvol_create_lvstore" 00:12:58.416 }, 00:12:58.416 { 00:12:58.416 "method": "bdev_wait_for_examine" 00:12:58.416 } 00:12:58.416 ] 00:12:58.416 } 00:12:58.416 ] 00:12:58.416 } 00:12:58.416 [2024-11-29 11:57:03.847664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.674 [2024-11-29 11:57:03.958751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.674  [2024-11-29T11:57:04.751Z] Copying: 12/36 [MB] (average 1090 MBps) 00:12:59.240 00:12:59.240 11:57:04 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:59.240 11:57:04 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:59.240 11:57:04 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:59.240 11:57:04 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:59.240 11:57:04 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:59.240 11:57:04 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:59.240 11:57:04 -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:59.240 11:57:04 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:59.240 11:57:04 -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:59.240 11:57:04 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:59.240 00:12:59.240 real 0m0.832s 00:12:59.240 user 0m0.495s 00:12:59.240 sys 0m0.233s 00:12:59.240 11:57:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.240 11:57:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 ************************************ 00:12:59.240 END TEST dd_sparse_file_to_file 00:12:59.240 ************************************ 00:12:59.240 11:57:04 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:12:59.240 11:57:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:59.240 11:57:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.240 11:57:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 ************************************ 00:12:59.240 START TEST dd_sparse_file_to_bdev 00:12:59.240 ************************************ 00:12:59.240 11:57:04 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:12:59.240 11:57:04 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:59.240 11:57:04 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:59.240 11:57:04 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:12:59.240 11:57:04 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:59.240 11:57:04 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:59.240 11:57:04 -- dd/sparse.sh@73 -- # gen_conf 00:12:59.240 11:57:04 -- dd/common.sh@31 -- # xtrace_disable 00:12:59.240 11:57:04 -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 [2024-11-29 11:57:04.585759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:59.240 [2024-11-29 11:57:04.585873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71672 ] 00:12:59.240 { 00:12:59.240 "subsystems": [ 00:12:59.240 { 00:12:59.240 "subsystem": "bdev", 00:12:59.240 "config": [ 00:12:59.240 { 00:12:59.240 "params": { 00:12:59.240 "block_size": 4096, 00:12:59.240 "filename": "dd_sparse_aio_disk", 00:12:59.240 "name": "dd_aio" 00:12:59.240 }, 00:12:59.240 "method": "bdev_aio_create" 00:12:59.240 }, 00:12:59.240 { 00:12:59.240 "params": { 00:12:59.240 "lvs_name": "dd_lvstore", 00:12:59.240 "lvol_name": "dd_lvol", 00:12:59.240 "size": 37748736, 00:12:59.240 "thin_provision": true 00:12:59.240 }, 00:12:59.240 "method": "bdev_lvol_create" 00:12:59.240 }, 00:12:59.240 { 00:12:59.240 "method": "bdev_wait_for_examine" 00:12:59.240 } 00:12:59.240 ] 00:12:59.240 } 00:12:59.240 ] 00:12:59.240 } 00:12:59.240 [2024-11-29 11:57:04.723146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.497 [2024-11-29 11:57:04.839989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.497 [2024-11-29 11:57:04.971736] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:12:59.754  [2024-11-29T11:57:05.265Z] Copying: 12/36 [MB] (average 500 MBps)[2024-11-29 11:57:05.017093] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:13:00.013 00:13:00.013 00:13:00.013 00:13:00.013 real 0m0.813s 00:13:00.013 user 0m0.521s 00:13:00.013 sys 0m0.219s 00:13:00.013 11:57:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.013 ************************************ 00:13:00.013 END TEST dd_sparse_file_to_bdev 00:13:00.013 ************************************ 00:13:00.013 11:57:05 -- common/autotest_common.sh@10 -- # set +x 
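The sparse passes above and below (file_to_file, file_to_bdev, bdev_to_file) all follow the same recipe: a 100 MiB backing file for an AIO bdev, an input file holding three 4 MiB data extents separated by holes, an spdk_dd copy run with --sparse plus a JSON bdev config, and a final check that the apparent size (stat %s) is preserved while the allocated block count (stat %b) stays small. A rough standalone sketch of that recipe, reusing the names from the log; the only liberty taken is writing the JSON config to a file (dd_sparse.json, an illustrative name) instead of feeding it over /dev/fd/62 as the harness's gen_conf helper does:

# backing file for the AIO bdev, plus a sparse input: 4 MiB extents at offsets 0, 16 MiB and 32 MiB
truncate dd_sparse_aio_disk --size 104857600
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

# same bdev config the tests echo above, hand-written here
cat > dd_sparse.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
 {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
 {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
 {"method":"bdev_wait_for_examine"}]}]}
EOF

# hole-preserving copy, then size vs. allocated-blocks comparison
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 \
    --bs=12582912 --sparse --json dd_sparse.json
stat --printf='%s %b\n' file_zero1 file_zero2   # expect 37748736 bytes but only 24576 allocated 512-byte blocks for both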
00:13:00.013 11:57:05 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:13:00.013 11:57:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.013 11:57:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.013 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.013 ************************************ 00:13:00.013 START TEST dd_sparse_bdev_to_file 00:13:00.013 ************************************ 00:13:00.013 11:57:05 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:13:00.013 11:57:05 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:13:00.013 11:57:05 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:13:00.013 11:57:05 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:13:00.013 11:57:05 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:13:00.013 11:57:05 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:13:00.013 11:57:05 -- dd/sparse.sh@91 -- # gen_conf 00:13:00.013 11:57:05 -- dd/common.sh@31 -- # xtrace_disable 00:13:00.013 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.013 [2024-11-29 11:57:05.448466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:00.013 [2024-11-29 11:57:05.448935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:13:00.013 { 00:13:00.013 "subsystems": [ 00:13:00.013 { 00:13:00.013 "subsystem": "bdev", 00:13:00.013 "config": [ 00:13:00.013 { 00:13:00.013 "params": { 00:13:00.013 "block_size": 4096, 00:13:00.013 "filename": "dd_sparse_aio_disk", 00:13:00.013 "name": "dd_aio" 00:13:00.013 }, 00:13:00.013 "method": "bdev_aio_create" 00:13:00.013 }, 00:13:00.013 { 00:13:00.013 "method": "bdev_wait_for_examine" 00:13:00.013 } 00:13:00.013 ] 00:13:00.013 } 00:13:00.013 ] 00:13:00.013 } 00:13:00.272 [2024-11-29 11:57:05.582819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.272 [2024-11-29 11:57:05.698371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.530  [2024-11-29T11:57:06.300Z] Copying: 12/36 [MB] (average 857 MBps) 00:13:00.789 00:13:00.789 11:57:06 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:13:00.789 11:57:06 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:13:00.789 11:57:06 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:13:00.789 11:57:06 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:13:00.789 11:57:06 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:13:00.789 11:57:06 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:13:00.789 11:57:06 -- dd/sparse.sh@102 -- # stat2_b=24576 00:13:00.789 11:57:06 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:13:00.789 ************************************ 00:13:00.789 END TEST dd_sparse_bdev_to_file 00:13:00.789 ************************************ 00:13:00.789 11:57:06 -- dd/sparse.sh@103 -- # stat3_b=24576 00:13:00.789 11:57:06 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:13:00.789 00:13:00.789 real 0m0.830s 00:13:00.789 user 0m0.502s 00:13:00.789 sys 0m0.231s 00:13:00.789 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.789 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:00.789 11:57:06 -- 
dd/sparse.sh@1 -- # cleanup 00:13:00.789 11:57:06 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:13:00.789 11:57:06 -- dd/sparse.sh@12 -- # rm file_zero1 00:13:00.789 11:57:06 -- dd/sparse.sh@13 -- # rm file_zero2 00:13:00.789 11:57:06 -- dd/sparse.sh@14 -- # rm file_zero3 00:13:01.048 ************************************ 00:13:01.048 END TEST spdk_dd_sparse 00:13:01.048 ************************************ 00:13:01.048 00:13:01.048 real 0m2.883s 00:13:01.048 user 0m1.681s 00:13:01.048 sys 0m0.918s 00:13:01.048 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.048 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.048 11:57:06 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:13:01.048 11:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.048 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.048 ************************************ 00:13:01.048 START TEST spdk_dd_negative 00:13:01.048 ************************************ 00:13:01.048 11:57:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:13:01.048 * Looking for test storage... 00:13:01.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:01.048 11:57:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:01.048 11:57:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:01.048 11:57:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:01.048 11:57:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:01.048 11:57:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:01.048 11:57:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:01.048 11:57:06 -- scripts/common.sh@335 -- # IFS=.-: 00:13:01.048 11:57:06 -- scripts/common.sh@335 -- # read -ra ver1 00:13:01.048 11:57:06 -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.048 11:57:06 -- scripts/common.sh@336 -- # read -ra ver2 00:13:01.048 11:57:06 -- scripts/common.sh@337 -- # local 'op=<' 00:13:01.048 11:57:06 -- scripts/common.sh@339 -- # ver1_l=2 00:13:01.048 11:57:06 -- scripts/common.sh@340 -- # ver2_l=1 00:13:01.048 11:57:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:01.048 11:57:06 -- scripts/common.sh@343 -- # case "$op" in 00:13:01.048 11:57:06 -- scripts/common.sh@344 -- # : 1 00:13:01.048 11:57:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:01.048 11:57:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.048 11:57:06 -- scripts/common.sh@364 -- # decimal 1 00:13:01.048 11:57:06 -- scripts/common.sh@352 -- # local d=1 00:13:01.048 11:57:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.048 11:57:06 -- scripts/common.sh@354 -- # echo 1 00:13:01.048 11:57:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:01.048 11:57:06 -- scripts/common.sh@365 -- # decimal 2 00:13:01.048 11:57:06 -- scripts/common.sh@352 -- # local d=2 00:13:01.048 11:57:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.048 11:57:06 -- scripts/common.sh@354 -- # echo 2 00:13:01.048 11:57:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:01.048 11:57:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:01.048 11:57:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:01.048 11:57:06 -- scripts/common.sh@367 -- # return 0 00:13:01.048 11:57:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:01.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.048 --rc genhtml_branch_coverage=1 00:13:01.048 --rc genhtml_function_coverage=1 00:13:01.048 --rc genhtml_legend=1 00:13:01.048 --rc geninfo_all_blocks=1 00:13:01.048 --rc geninfo_unexecuted_blocks=1 00:13:01.048 00:13:01.048 ' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:01.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.048 --rc genhtml_branch_coverage=1 00:13:01.048 --rc genhtml_function_coverage=1 00:13:01.048 --rc genhtml_legend=1 00:13:01.048 --rc geninfo_all_blocks=1 00:13:01.048 --rc geninfo_unexecuted_blocks=1 00:13:01.048 00:13:01.048 ' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:01.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.048 --rc genhtml_branch_coverage=1 00:13:01.048 --rc genhtml_function_coverage=1 00:13:01.048 --rc genhtml_legend=1 00:13:01.048 --rc geninfo_all_blocks=1 00:13:01.048 --rc geninfo_unexecuted_blocks=1 00:13:01.048 00:13:01.048 ' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:01.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.048 --rc genhtml_branch_coverage=1 00:13:01.048 --rc genhtml_function_coverage=1 00:13:01.048 --rc genhtml_legend=1 00:13:01.048 --rc geninfo_all_blocks=1 00:13:01.048 --rc geninfo_unexecuted_blocks=1 00:13:01.048 00:13:01.048 ' 00:13:01.048 11:57:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.048 11:57:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.048 11:57:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.048 11:57:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.048 11:57:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.048 11:57:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.048 11:57:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.048 11:57:06 -- paths/export.sh@5 -- # export PATH 00:13:01.048 11:57:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.048 11:57:06 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:01.048 11:57:06 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:01.048 11:57:06 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:01.048 11:57:06 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:01.048 11:57:06 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:13:01.048 11:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.048 11:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.048 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.049 ************************************ 00:13:01.049 START TEST dd_invalid_arguments 00:13:01.049 ************************************ 00:13:01.049 11:57:06 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:13:01.049 11:57:06 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:13:01.308 11:57:06 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.308 11:57:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:13:01.308 11:57:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.308 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.308 11:57:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.308 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.308 11:57:06 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.308 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.308 11:57:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.308 11:57:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.308 11:57:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:13:01.308 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:13:01.308 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:13:01.308 options: 00:13:01.308 -c, --config JSON config file (default none) 00:13:01.308 --json JSON config file (default none) 00:13:01.308 --json-ignore-init-errors 00:13:01.308 don't exit on invalid config entry 00:13:01.308 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:13:01.308 -g, --single-file-segments 00:13:01.308 force creating just one hugetlbfs file 00:13:01.308 -h, --help show this usage 00:13:01.308 -i, --shm-id shared memory ID (optional) 00:13:01.308 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:13:01.308 --lcores lcore to CPU mapping list. The list is in the format: 00:13:01.308 [<,lcores[@CPUs]>...] 00:13:01.308 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:13:01.308 Within the group, '-' is used for range separator, 00:13:01.308 ',' is used for single number separator. 00:13:01.308 '( )' can be omitted for single element group, 00:13:01.308 '@' can be omitted if cpus and lcores have the same value 00:13:01.308 -n, --mem-channels channel number of memory channels used for DPDK 00:13:01.308 -p, --main-core main (primary) core for DPDK 00:13:01.308 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:13:01.308 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:13:01.308 --disable-cpumask-locks Disable CPU core lock files. 00:13:01.308 --silence-noticelog disable notice level logging to stderr 00:13:01.308 --msg-mempool-size global message memory pool size in count (default: 262143) 00:13:01.308 -u, --no-pci disable PCI access 00:13:01.308 --wait-for-rpc wait for RPCs to initialize subsystems 00:13:01.308 --max-delay maximum reactor delay (in microseconds) 00:13:01.308 -B, --pci-blocked pci addr to block (can be used more than once) 00:13:01.308 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:13:01.308 -R, --huge-unlink unlink huge files after initialization 00:13:01.308 -v, --version print SPDK version 00:13:01.308 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:13:01.308 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:13:01.308 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:13:01.308 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:13:01.308 Tracepoints vary in size and can use more than one trace entry. 
00:13:01.308 --rpcs-allowed comma-separated list of permitted RPCS 00:13:01.308 --env-context Opaque context for use of the env implementation 00:13:01.308 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:13:01.308 --no-huge run without using hugepages 00:13:01.308 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:13:01.308 -e, --tpoint-group [:] 00:13:01.309 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:13:01.309 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:13:01.309 Groups and masks [2024-11-29 11:57:06.602604] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:13:01.309 can be combined (e.g. thread,bdev:0x1). 00:13:01.309 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:13:01.309 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:13:01.309 [--------- DD Options ---------] 00:13:01.309 --if Input file. Must specify either --if or --ib. 00:13:01.309 --ib Input bdev. Must specifier either --if or --ib 00:13:01.309 --of Output file. Must specify either --of or --ob. 00:13:01.309 --ob Output bdev. Must specify either --of or --ob. 00:13:01.309 --iflag Input file flags. 00:13:01.309 --oflag Output file flags. 00:13:01.309 --bs I/O unit size (default: 4096) 00:13:01.309 --qd Queue depth (default: 2) 00:13:01.309 --count I/O unit count. The number of I/O units to copy. (default: all) 00:13:01.309 --skip Skip this many I/O units at start of input. (default: 0) 00:13:01.309 --seek Skip this many I/O units at start of output. (default: 0) 00:13:01.309 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:13:01.309 --sparse Enable hole skipping in input target 00:13:01.309 Available iflag and oflag values: 00:13:01.309 append - append mode 00:13:01.309 direct - use direct I/O for data 00:13:01.309 directory - fail unless a directory 00:13:01.309 dsync - use synchronized I/O for data 00:13:01.309 noatime - do not update access time 00:13:01.309 noctty - do not assign controlling terminal from file 00:13:01.309 nofollow - do not follow symlinks 00:13:01.309 nonblock - use non-blocking I/O 00:13:01.309 sync - use synchronized I/O for data and metadata 00:13:01.309 ************************************ 00:13:01.309 END TEST dd_invalid_arguments 00:13:01.309 ************************************ 00:13:01.309 11:57:06 -- common/autotest_common.sh@653 -- # es=2 00:13:01.309 11:57:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.309 11:57:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.309 11:57:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.309 00:13:01.309 real 0m0.062s 00:13:01.309 user 0m0.034s 00:13:01.309 sys 0m0.026s 00:13:01.309 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.309 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.309 11:57:06 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:13:01.309 11:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.309 11:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.309 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.309 ************************************ 00:13:01.309 START TEST dd_double_input 00:13:01.309 ************************************ 00:13:01.309 11:57:06 -- common/autotest_common.sh@1114 -- # double_input 00:13:01.309 11:57:06 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:13:01.309 11:57:06 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.309 11:57:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:13:01.309 11:57:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.309 11:57:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:13:01.309 [2024-11-29 11:57:06.720314] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
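Everything from here to the end of the suite exercises spdk_dd's argument validation rather than any data path: an unrecognized option dumps the usage text above, and naming both a file and a bdev for the same side (or neither an input nor an output) is rejected in main() before any I/O starts. A condensed sketch of those invocations; dd.dump0 and dd.dump1 are the scratch files the suite touched earlier, and the empty --ib=/--ob= values are enough to trip the checks, exactly as in the log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd

$SPDK_DD --ii= --ob=                          # unknown option: usage text plus "Invalid arguments"
$SPDK_DD --if=dd.dump0 --ib= --ob=            # "You may specify either --if or --ib, but not both."
$SPDK_DD --if=dd.dump0 --of=dd.dump1 --ob=    # "You may specify either --of or --ob, but not both."
$SPDK_DD --ob=                                # "You must specify either --if or --ib"
$SPDK_DD --if=dd.dump0                        # "You must specify either --of or --ob"

Each call exits non-zero, which is what the NOT wrapper in the surrounding assertions checks for.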
00:13:01.309 11:57:06 -- common/autotest_common.sh@653 -- # es=22 00:13:01.309 11:57:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.309 ************************************ 00:13:01.309 END TEST dd_double_input 00:13:01.309 ************************************ 00:13:01.309 11:57:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.309 11:57:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.309 00:13:01.309 real 0m0.063s 00:13:01.309 user 0m0.036s 00:13:01.309 sys 0m0.026s 00:13:01.309 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.309 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.309 11:57:06 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:13:01.309 11:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.309 11:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.309 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.309 ************************************ 00:13:01.309 START TEST dd_double_output 00:13:01.309 ************************************ 00:13:01.309 11:57:06 -- common/autotest_common.sh@1114 -- # double_output 00:13:01.309 11:57:06 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:13:01.309 11:57:06 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.309 11:57:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:13:01.309 11:57:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.309 11:57:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.309 11:57:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:13:01.568 [2024-11-29 11:57:06.835421] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:13:01.568 11:57:06 -- common/autotest_common.sh@653 -- # es=22 00:13:01.568 11:57:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.568 ************************************ 00:13:01.568 END TEST dd_double_output 00:13:01.568 ************************************ 00:13:01.568 11:57:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.568 11:57:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.568 00:13:01.568 real 0m0.069s 00:13:01.568 user 0m0.042s 00:13:01.568 sys 0m0.026s 00:13:01.568 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.568 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.568 11:57:06 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:13:01.568 11:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.568 11:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.568 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.568 ************************************ 00:13:01.568 START TEST dd_no_input 00:13:01.568 ************************************ 00:13:01.568 11:57:06 -- common/autotest_common.sh@1114 -- # no_input 00:13:01.568 11:57:06 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:13:01.568 11:57:06 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.568 11:57:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:13:01.568 11:57:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.568 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.568 11:57:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.568 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.568 11:57:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.568 11:57:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.568 11:57:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.568 11:57:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.568 11:57:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:13:01.568 [2024-11-29 11:57:06.962576] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:13:01.568 11:57:06 -- common/autotest_common.sh@653 -- # es=22 00:13:01.568 11:57:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.568 11:57:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.568 11:57:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.568 00:13:01.568 real 0m0.073s 00:13:01.568 user 0m0.041s 00:13:01.568 sys 0m0.030s 00:13:01.568 11:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.568 11:57:06 -- common/autotest_common.sh@10 -- # set +x 00:13:01.568 ************************************ 00:13:01.568 END TEST dd_no_input 00:13:01.568 ************************************ 00:13:01.568 11:57:07 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:13:01.568 11:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.568 11:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.568 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.568 ************************************ 
00:13:01.568 START TEST dd_no_output 00:13:01.568 ************************************ 00:13:01.568 11:57:07 -- common/autotest_common.sh@1114 -- # no_output 00:13:01.568 11:57:07 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:01.568 11:57:07 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.569 11:57:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:01.569 11:57:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.569 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.569 11:57:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.569 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.569 11:57:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.569 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.569 11:57:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.569 11:57:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.569 11:57:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:01.827 [2024-11-29 11:57:07.081230] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:13:01.827 11:57:07 -- common/autotest_common.sh@653 -- # es=22 00:13:01.827 11:57:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.827 11:57:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.827 ************************************ 00:13:01.827 END TEST dd_no_output 00:13:01.827 ************************************ 00:13:01.827 11:57:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.827 00:13:01.827 real 0m0.069s 00:13:01.827 user 0m0.044s 00:13:01.827 sys 0m0.024s 00:13:01.827 11:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.827 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.827 11:57:07 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:13:01.827 11:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.827 11:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.827 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.827 ************************************ 00:13:01.827 START TEST dd_wrong_blocksize 00:13:01.827 ************************************ 00:13:01.827 11:57:07 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:13:01.827 11:57:07 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:13:01.827 11:57:07 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.828 11:57:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:13:01.828 11:57:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:01.828 11:57:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:13:01.828 [2024-11-29 11:57:07.201896] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:13:01.828 11:57:07 -- common/autotest_common.sh@653 -- # es=22 00:13:01.828 11:57:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.828 11:57:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.828 11:57:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.828 00:13:01.828 real 0m0.065s 00:13:01.828 user 0m0.032s 00:13:01.828 sys 0m0.032s 00:13:01.828 11:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.828 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.828 ************************************ 00:13:01.828 END TEST dd_wrong_blocksize 00:13:01.828 ************************************ 00:13:01.828 11:57:07 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:13:01.828 11:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:01.828 11:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.828 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.828 ************************************ 00:13:01.828 START TEST dd_smaller_blocksize 00:13:01.828 ************************************ 00:13:01.828 11:57:07 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:13:01.828 11:57:07 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:13:01.828 11:57:07 -- common/autotest_common.sh@650 -- # local es=0 00:13:01.828 11:57:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:13:01.828 11:57:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:01.828 11:57:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:13:01.828 11:57:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:13:01.828 [2024-11-29 11:57:07.321734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:01.828 [2024-11-29 11:57:07.321867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71938 ] 00:13:02.086 [2024-11-29 11:57:07.464498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.086 [2024-11-29 11:57:07.588016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.346 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:13:02.346 [2024-11-29 11:57:07.704337] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:13:02.346 [2024-11-29 11:57:07.704371] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:02.604 [2024-11-29 11:57:07.863873] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:13:02.604 11:57:07 -- common/autotest_common.sh@653 -- # es=244 00:13:02.604 11:57:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.604 11:57:07 -- common/autotest_common.sh@662 -- # es=116 00:13:02.604 11:57:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:02.604 11:57:07 -- common/autotest_common.sh@670 -- # es=1 00:13:02.604 11:57:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.604 00:13:02.604 real 0m0.713s 00:13:02.604 user 0m0.417s 00:13:02.604 sys 0m0.190s 00:13:02.604 11:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.604 ************************************ 00:13:02.604 END TEST dd_smaller_blocksize 00:13:02.604 ************************************ 00:13:02.604 11:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.604 11:57:08 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:13:02.604 11:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.604 11:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.604 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.604 ************************************ 00:13:02.604 START TEST dd_invalid_count 00:13:02.604 ************************************ 00:13:02.604 11:57:08 -- common/autotest_common.sh@1114 -- # invalid_count 00:13:02.604 11:57:08 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:13:02.604 11:57:08 -- common/autotest_common.sh@650 -- # local es=0 00:13:02.604 11:57:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:13:02.604 11:57:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.604 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.604 11:57:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.604 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.604 11:57:08 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.604 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.604 11:57:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.604 11:57:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:02.604 11:57:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:13:02.604 [2024-11-29 11:57:08.088749] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:13:02.604 11:57:08 -- common/autotest_common.sh@653 -- # es=22 00:13:02.604 11:57:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.604 11:57:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.604 11:57:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.604 00:13:02.604 real 0m0.074s 00:13:02.604 user 0m0.041s 00:13:02.604 sys 0m0.032s 00:13:02.604 11:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.604 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.604 ************************************ 00:13:02.604 END TEST dd_invalid_count 00:13:02.604 ************************************ 00:13:02.862 11:57:08 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:13:02.862 11:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.862 11:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.862 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.862 ************************************ 00:13:02.862 START TEST dd_invalid_oflag 00:13:02.862 ************************************ 00:13:02.862 11:57:08 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:13:02.862 11:57:08 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:13:02.862 11:57:08 -- common/autotest_common.sh@650 -- # local es=0 00:13:02.862 11:57:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:13:02.862 11:57:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.862 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.862 11:57:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.862 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.862 11:57:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.862 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.862 11:57:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.862 11:57:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:02.862 11:57:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:13:02.862 [2024-11-29 11:57:08.215363] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:13:02.862 11:57:08 -- common/autotest_common.sh@653 -- # es=22 00:13:02.862 11:57:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.862 11:57:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.862 
11:57:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.862 00:13:02.862 real 0m0.074s 00:13:02.862 user 0m0.042s 00:13:02.862 sys 0m0.031s 00:13:02.862 11:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.862 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.862 ************************************ 00:13:02.862 END TEST dd_invalid_oflag 00:13:02.862 ************************************ 00:13:02.862 11:57:08 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:13:02.862 11:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.863 11:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.863 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.863 ************************************ 00:13:02.863 START TEST dd_invalid_iflag 00:13:02.863 ************************************ 00:13:02.863 11:57:08 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:13:02.863 11:57:08 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:13:02.863 11:57:08 -- common/autotest_common.sh@650 -- # local es=0 00:13:02.863 11:57:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:13:02.863 11:57:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.863 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.863 11:57:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.863 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.863 11:57:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.863 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.863 11:57:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:02.863 11:57:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:02.863 11:57:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:13:02.863 [2024-11-29 11:57:08.338688] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:13:02.863 11:57:08 -- common/autotest_common.sh@653 -- # es=22 00:13:02.863 11:57:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.863 11:57:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.863 11:57:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.863 00:13:02.863 real 0m0.074s 00:13:02.863 user 0m0.046s 00:13:02.863 sys 0m0.026s 00:13:02.863 11:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.863 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.863 ************************************ 00:13:02.863 END TEST dd_invalid_iflag 00:13:02.863 ************************************ 00:13:03.121 11:57:08 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:13:03.121 11:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.121 11:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.121 11:57:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.121 ************************************ 00:13:03.121 START TEST dd_unknown_flag 00:13:03.121 ************************************ 00:13:03.121 11:57:08 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:13:03.121 11:57:08 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:13:03.121 11:57:08 -- common/autotest_common.sh@650 -- # local es=0 00:13:03.121 11:57:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:13:03.121 11:57:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.121 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.121 11:57:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.121 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.121 11:57:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.121 11:57:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.121 11:57:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.121 11:57:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:03.121 11:57:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:13:03.121 [2024-11-29 11:57:08.465161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:03.121 [2024-11-29 11:57:08.465309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72030 ] 00:13:03.121 [2024-11-29 11:57:08.607157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.380 [2024-11-29 11:57:08.733822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.380 [2024-11-29 11:57:08.853125] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:13:03.380 [2024-11-29 11:57:08.853212] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:13:03.380 [2024-11-29 11:57:08.853228] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:13:03.380 [2024-11-29 11:57:08.853243] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:03.638 [2024-11-29 11:57:08.988957] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:13:03.638 11:57:09 -- common/autotest_common.sh@653 -- # es=236 00:13:03.638 11:57:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:03.638 11:57:09 -- common/autotest_common.sh@662 -- # es=108 00:13:03.638 11:57:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:03.638 11:57:09 -- common/autotest_common.sh@670 -- # es=1 00:13:03.638 11:57:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:03.639 00:13:03.639 real 0m0.671s 00:13:03.639 user 0m0.364s 00:13:03.639 sys 0m0.198s 00:13:03.639 11:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:03.639 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.639 ************************************ 00:13:03.639 END 
TEST dd_unknown_flag 00:13:03.639 ************************************ 00:13:03.639 11:57:09 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:13:03.639 11:57:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:03.639 11:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.639 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.639 ************************************ 00:13:03.639 START TEST dd_invalid_json 00:13:03.639 ************************************ 00:13:03.639 11:57:09 -- common/autotest_common.sh@1114 -- # invalid_json 00:13:03.639 11:57:09 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:13:03.639 11:57:09 -- common/autotest_common.sh@650 -- # local es=0 00:13:03.639 11:57:09 -- dd/negative_dd.sh@95 -- # : 00:13:03.639 11:57:09 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:13:03.639 11:57:09 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.639 11:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.639 11:57:09 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.639 11:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.639 11:57:09 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.639 11:57:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:03.639 11:57:09 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:03.639 11:57:09 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:03.639 11:57:09 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:13:03.898 [2024-11-29 11:57:09.200644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
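This last case hands --json a stream that is not a JSON document at all (the ':' no-op echoed above suggests the suite substitutes empty output), so app_json_config_read fails and the copy never starts. A hedged equivalent, piping an empty document over stdin instead of the harness's /dev/fd/62 process substitution:

: | $SPDK_DD --if=dd.dump0 --of=dd.dump1 --json /dev/fd/0
# -> "Parsing JSON configuration failed (-2)", then "Error occurred while performing copy"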
00:13:03.898 [2024-11-29 11:57:09.200771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72056 ] 00:13:03.898 [2024-11-29 11:57:09.338842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.157 [2024-11-29 11:57:09.434196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.157 [2024-11-29 11:57:09.434352] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:13:04.157 [2024-11-29 11:57:09.434373] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:04.157 [2024-11-29 11:57:09.434414] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:13:04.157 11:57:09 -- common/autotest_common.sh@653 -- # es=234 00:13:04.157 11:57:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.157 11:57:09 -- common/autotest_common.sh@662 -- # es=106 00:13:04.157 11:57:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:04.157 11:57:09 -- common/autotest_common.sh@670 -- # es=1 00:13:04.157 11:57:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.157 00:13:04.157 real 0m0.408s 00:13:04.157 user 0m0.224s 00:13:04.157 sys 0m0.081s 00:13:04.157 11:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.157 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 ************************************ 00:13:04.157 END TEST dd_invalid_json 00:13:04.157 ************************************ 00:13:04.157 00:13:04.157 real 0m3.233s 00:13:04.157 user 0m1.688s 00:13:04.157 sys 0m1.180s 00:13:04.157 11:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.157 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 ************************************ 00:13:04.157 END TEST spdk_dd_negative 00:13:04.157 ************************************ 00:13:04.157 00:13:04.157 real 1m32.982s 00:13:04.157 user 0m58.919s 00:13:04.157 sys 0m24.827s 00:13:04.157 11:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.157 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 ************************************ 00:13:04.157 END TEST spdk_dd 00:13:04.157 ************************************ 00:13:04.416 11:57:09 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@255 -- # timing_exit lib 00:13:04.416 11:57:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.416 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 11:57:09 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:13:04.416 11:57:09 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:13:04.416 11:57:09 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:04.416 11:57:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.416 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 ************************************ 00:13:04.416 START 
TEST nvmf_tcp 00:13:04.416 ************************************ 00:13:04.416 11:57:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:04.416 * Looking for test storage... 00:13:04.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:04.416 11:57:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:04.416 11:57:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:04.416 11:57:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:04.416 11:57:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:04.416 11:57:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:04.416 11:57:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:04.416 11:57:09 -- scripts/common.sh@335 -- # IFS=.-: 00:13:04.416 11:57:09 -- scripts/common.sh@335 -- # read -ra ver1 00:13:04.416 11:57:09 -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.416 11:57:09 -- scripts/common.sh@336 -- # read -ra ver2 00:13:04.416 11:57:09 -- scripts/common.sh@337 -- # local 'op=<' 00:13:04.416 11:57:09 -- scripts/common.sh@339 -- # ver1_l=2 00:13:04.416 11:57:09 -- scripts/common.sh@340 -- # ver2_l=1 00:13:04.416 11:57:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:04.416 11:57:09 -- scripts/common.sh@343 -- # case "$op" in 00:13:04.416 11:57:09 -- scripts/common.sh@344 -- # : 1 00:13:04.416 11:57:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:04.416 11:57:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.416 11:57:09 -- scripts/common.sh@364 -- # decimal 1 00:13:04.416 11:57:09 -- scripts/common.sh@352 -- # local d=1 00:13:04.416 11:57:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.416 11:57:09 -- scripts/common.sh@354 -- # echo 1 00:13:04.416 11:57:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:04.416 11:57:09 -- scripts/common.sh@365 -- # decimal 2 00:13:04.416 11:57:09 -- scripts/common.sh@352 -- # local d=2 00:13:04.416 11:57:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.416 11:57:09 -- scripts/common.sh@354 -- # echo 2 00:13:04.416 11:57:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:04.416 11:57:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:04.416 11:57:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:04.416 11:57:09 -- scripts/common.sh@367 -- # return 0 00:13:04.416 11:57:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.416 --rc genhtml_branch_coverage=1 00:13:04.416 --rc genhtml_function_coverage=1 00:13:04.416 --rc genhtml_legend=1 00:13:04.416 --rc geninfo_all_blocks=1 00:13:04.416 --rc geninfo_unexecuted_blocks=1 00:13:04.416 00:13:04.416 ' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.416 --rc genhtml_branch_coverage=1 00:13:04.416 --rc genhtml_function_coverage=1 00:13:04.416 --rc genhtml_legend=1 00:13:04.416 --rc geninfo_all_blocks=1 00:13:04.416 --rc geninfo_unexecuted_blocks=1 00:13:04.416 00:13:04.416 ' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.416 --rc 
genhtml_branch_coverage=1 00:13:04.416 --rc genhtml_function_coverage=1 00:13:04.416 --rc genhtml_legend=1 00:13:04.416 --rc geninfo_all_blocks=1 00:13:04.416 --rc geninfo_unexecuted_blocks=1 00:13:04.416 00:13:04.416 ' 00:13:04.416 11:57:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:04.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.416 --rc genhtml_branch_coverage=1 00:13:04.416 --rc genhtml_function_coverage=1 00:13:04.416 --rc genhtml_legend=1 00:13:04.416 --rc geninfo_all_blocks=1 00:13:04.416 --rc geninfo_unexecuted_blocks=1 00:13:04.416 00:13:04.416 ' 00:13:04.416 11:57:09 -- nvmf/nvmf.sh@10 -- # uname -s 00:13:04.416 11:57:09 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:04.416 11:57:09 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.416 11:57:09 -- nvmf/common.sh@7 -- # uname -s 00:13:04.416 11:57:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.416 11:57:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.416 11:57:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.416 11:57:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.416 11:57:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.416 11:57:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.416 11:57:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.416 11:57:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.416 11:57:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.416 11:57:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.416 11:57:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:13:04.417 11:57:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:13:04.417 11:57:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.417 11:57:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.417 11:57:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.417 11:57:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.676 11:57:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.676 11:57:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.676 11:57:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.676 11:57:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.676 11:57:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.676 11:57:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.676 11:57:09 -- paths/export.sh@5 -- # export PATH 00:13:04.676 11:57:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.676 11:57:09 -- nvmf/common.sh@46 -- # : 0 00:13:04.676 11:57:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:04.676 11:57:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:04.676 11:57:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:04.676 11:57:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.676 11:57:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.676 11:57:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:04.676 11:57:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:04.676 11:57:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:04.676 11:57:09 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:04.676 11:57:09 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:04.676 11:57:09 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:04.676 11:57:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.676 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 11:57:09 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:13:04.676 11:57:09 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.676 11:57:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.676 11:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.676 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.676 ************************************ 00:13:04.676 START TEST nvmf_host_management 00:13:04.676 ************************************ 00:13:04.676 11:57:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:04.676 * Looking for test storage... 
00:13:04.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:04.676 11:57:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:04.676 11:57:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:04.676 11:57:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:04.676 11:57:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:04.676 11:57:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:04.676 11:57:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:04.676 11:57:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:04.676 11:57:10 -- scripts/common.sh@335 -- # IFS=.-: 00:13:04.676 11:57:10 -- scripts/common.sh@335 -- # read -ra ver1 00:13:04.676 11:57:10 -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.676 11:57:10 -- scripts/common.sh@336 -- # read -ra ver2 00:13:04.676 11:57:10 -- scripts/common.sh@337 -- # local 'op=<' 00:13:04.676 11:57:10 -- scripts/common.sh@339 -- # ver1_l=2 00:13:04.676 11:57:10 -- scripts/common.sh@340 -- # ver2_l=1 00:13:04.676 11:57:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:04.676 11:57:10 -- scripts/common.sh@343 -- # case "$op" in 00:13:04.676 11:57:10 -- scripts/common.sh@344 -- # : 1 00:13:04.676 11:57:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:04.676 11:57:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.676 11:57:10 -- scripts/common.sh@364 -- # decimal 1 00:13:04.676 11:57:10 -- scripts/common.sh@352 -- # local d=1 00:13:04.676 11:57:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.676 11:57:10 -- scripts/common.sh@354 -- # echo 1 00:13:04.676 11:57:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:04.676 11:57:10 -- scripts/common.sh@365 -- # decimal 2 00:13:04.676 11:57:10 -- scripts/common.sh@352 -- # local d=2 00:13:04.676 11:57:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.676 11:57:10 -- scripts/common.sh@354 -- # echo 2 00:13:04.676 11:57:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:04.676 11:57:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:04.676 11:57:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:04.676 11:57:10 -- scripts/common.sh@367 -- # return 0 00:13:04.676 11:57:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.676 11:57:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:04.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.676 --rc genhtml_branch_coverage=1 00:13:04.676 --rc genhtml_function_coverage=1 00:13:04.676 --rc genhtml_legend=1 00:13:04.676 --rc geninfo_all_blocks=1 00:13:04.676 --rc geninfo_unexecuted_blocks=1 00:13:04.676 00:13:04.676 ' 00:13:04.676 11:57:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:04.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.676 --rc genhtml_branch_coverage=1 00:13:04.676 --rc genhtml_function_coverage=1 00:13:04.676 --rc genhtml_legend=1 00:13:04.676 --rc geninfo_all_blocks=1 00:13:04.676 --rc geninfo_unexecuted_blocks=1 00:13:04.676 00:13:04.676 ' 00:13:04.676 11:57:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:04.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.676 --rc genhtml_branch_coverage=1 00:13:04.676 --rc genhtml_function_coverage=1 00:13:04.676 --rc genhtml_legend=1 00:13:04.676 --rc geninfo_all_blocks=1 00:13:04.676 --rc geninfo_unexecuted_blocks=1 00:13:04.676 00:13:04.676 ' 00:13:04.676 
11:57:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:04.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.676 --rc genhtml_branch_coverage=1 00:13:04.676 --rc genhtml_function_coverage=1 00:13:04.676 --rc genhtml_legend=1 00:13:04.676 --rc geninfo_all_blocks=1 00:13:04.676 --rc geninfo_unexecuted_blocks=1 00:13:04.676 00:13:04.676 ' 00:13:04.676 11:57:10 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.676 11:57:10 -- nvmf/common.sh@7 -- # uname -s 00:13:04.676 11:57:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.676 11:57:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.676 11:57:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.676 11:57:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.676 11:57:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.676 11:57:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.676 11:57:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.676 11:57:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.676 11:57:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.676 11:57:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.676 11:57:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:13:04.676 11:57:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:13:04.676 11:57:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.676 11:57:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.676 11:57:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.676 11:57:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.676 11:57:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.676 11:57:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.676 11:57:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.676 11:57:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.677 11:57:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.677 11:57:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.677 11:57:10 -- paths/export.sh@5 -- # export PATH 00:13:04.677 11:57:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.677 11:57:10 -- nvmf/common.sh@46 -- # : 0 00:13:04.677 11:57:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:04.677 11:57:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:04.677 11:57:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:04.677 11:57:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.677 11:57:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.677 11:57:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:04.677 11:57:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:04.677 11:57:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:04.677 11:57:10 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.677 11:57:10 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.677 11:57:10 -- target/host_management.sh@104 -- # nvmftestinit 00:13:04.677 11:57:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:04.677 11:57:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.677 11:57:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:04.677 11:57:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:04.677 11:57:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:04.677 11:57:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.677 11:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.677 11:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.677 11:57:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:04.677 11:57:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:04.677 11:57:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:04.677 11:57:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:04.677 11:57:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:04.677 11:57:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:04.677 11:57:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.677 11:57:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.677 11:57:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:04.677 11:57:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:04.677 11:57:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:04.677 11:57:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:04.677 11:57:10 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:04.677 11:57:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.677 11:57:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:04.677 11:57:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:04.677 11:57:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:04.677 11:57:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:04.677 11:57:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:04.677 Cannot find device "nvmf_init_br" 00:13:04.677 11:57:10 -- nvmf/common.sh@153 -- # true 00:13:04.677 11:57:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:04.935 Cannot find device "nvmf_tgt_br" 00:13:04.935 11:57:10 -- nvmf/common.sh@154 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.935 Cannot find device "nvmf_tgt_br2" 00:13:04.935 11:57:10 -- nvmf/common.sh@155 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:04.935 Cannot find device "nvmf_init_br" 00:13:04.935 11:57:10 -- nvmf/common.sh@156 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:04.935 Cannot find device "nvmf_tgt_br" 00:13:04.935 11:57:10 -- nvmf/common.sh@157 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:04.935 Cannot find device "nvmf_tgt_br2" 00:13:04.935 11:57:10 -- nvmf/common.sh@158 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:04.935 Cannot find device "nvmf_br" 00:13:04.935 11:57:10 -- nvmf/common.sh@159 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:04.935 Cannot find device "nvmf_init_if" 00:13:04.935 11:57:10 -- nvmf/common.sh@160 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:04.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.935 11:57:10 -- nvmf/common.sh@161 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:04.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.935 11:57:10 -- nvmf/common.sh@162 -- # true 00:13:04.935 11:57:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:04.935 11:57:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:04.935 11:57:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:04.935 11:57:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:04.935 11:57:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:04.935 11:57:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:04.935 11:57:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:04.935 11:57:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:04.935 11:57:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:04.935 11:57:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:04.935 11:57:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:04.935 11:57:10 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:04.935 11:57:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:04.935 11:57:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:04.935 11:57:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:04.935 11:57:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:04.935 11:57:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:05.194 11:57:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:05.194 11:57:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.194 11:57:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.194 11:57:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:05.194 11:57:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:05.194 11:57:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:05.194 11:57:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:05.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:05.194 00:13:05.194 --- 10.0.0.2 ping statistics --- 00:13:05.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.194 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:05.194 11:57:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:05.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:05.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:05.194 00:13:05.194 --- 10.0.0.3 ping statistics --- 00:13:05.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.194 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:05.194 11:57:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:05.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:13:05.194 00:13:05.194 --- 10.0.0.1 ping statistics --- 00:13:05.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.194 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:05.194 11:57:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.194 11:57:10 -- nvmf/common.sh@421 -- # return 0 00:13:05.194 11:57:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:05.195 11:57:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.195 11:57:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:05.195 11:57:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:05.195 11:57:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.195 11:57:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:05.195 11:57:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:05.195 11:57:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:05.195 11:57:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:05.195 11:57:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.195 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.195 ************************************ 00:13:05.195 START TEST nvmf_host_management 00:13:05.195 ************************************ 00:13:05.195 11:57:10 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:13:05.195 11:57:10 -- target/host_management.sh@69 -- # starttarget 00:13:05.195 11:57:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:05.195 11:57:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:05.195 11:57:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.195 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.195 11:57:10 -- nvmf/common.sh@469 -- # nvmfpid=72335 00:13:05.195 11:57:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:05.195 11:57:10 -- nvmf/common.sh@470 -- # waitforlisten 72335 00:13:05.195 11:57:10 -- common/autotest_common.sh@829 -- # '[' -z 72335 ']' 00:13:05.195 11:57:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.195 11:57:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.195 11:57:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.195 11:57:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.195 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.195 [2024-11-29 11:57:10.644968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:05.195 [2024-11-29 11:57:10.645070] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.454 [2024-11-29 11:57:10.784033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.454 [2024-11-29 11:57:10.913836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.454 [2024-11-29 11:57:10.914060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:05.454 [2024-11-29 11:57:10.914076] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.454 [2024-11-29 11:57:10.914087] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.454 [2024-11-29 11:57:10.914859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.454 [2024-11-29 11:57:10.915033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.454 [2024-11-29 11:57:10.915198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:05.454 [2024-11-29 11:57:10.915211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.390 11:57:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.390 11:57:11 -- common/autotest_common.sh@862 -- # return 0 00:13:06.390 11:57:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.390 11:57:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 11:57:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.390 11:57:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.390 11:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 [2024-11-29 11:57:11.641955] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.390 11:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.390 11:57:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:06.390 11:57:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 11:57:11 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:06.390 11:57:11 -- target/host_management.sh@23 -- # cat 00:13:06.390 11:57:11 -- target/host_management.sh@30 -- # rpc_cmd 00:13:06.390 11:57:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 Malloc0 00:13:06.390 [2024-11-29 11:57:11.731443] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.390 11:57:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.390 11:57:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:06.390 11:57:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 11:57:11 -- target/host_management.sh@73 -- # perfpid=72389 00:13:06.390 11:57:11 -- target/host_management.sh@74 -- # waitforlisten 72389 /var/tmp/bdevperf.sock 00:13:06.390 11:57:11 -- common/autotest_common.sh@829 -- # '[' -z 72389 ']' 00:13:06.390 11:57:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.390 11:57:11 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:06.390 11:57:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.390 11:57:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:06.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:06.390 11:57:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.390 11:57:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.390 11:57:11 -- nvmf/common.sh@520 -- # config=() 00:13:06.390 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.390 11:57:11 -- nvmf/common.sh@520 -- # local subsystem config 00:13:06.390 11:57:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:06.390 11:57:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:06.390 { 00:13:06.390 "params": { 00:13:06.390 "name": "Nvme$subsystem", 00:13:06.390 "trtype": "$TEST_TRANSPORT", 00:13:06.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.390 "adrfam": "ipv4", 00:13:06.390 "trsvcid": "$NVMF_PORT", 00:13:06.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.390 "hdgst": ${hdgst:-false}, 00:13:06.390 "ddgst": ${ddgst:-false} 00:13:06.390 }, 00:13:06.390 "method": "bdev_nvme_attach_controller" 00:13:06.390 } 00:13:06.390 EOF 00:13:06.390 )") 00:13:06.390 11:57:11 -- nvmf/common.sh@542 -- # cat 00:13:06.390 11:57:11 -- nvmf/common.sh@544 -- # jq . 00:13:06.390 11:57:11 -- nvmf/common.sh@545 -- # IFS=, 00:13:06.390 11:57:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:06.390 "params": { 00:13:06.390 "name": "Nvme0", 00:13:06.390 "trtype": "tcp", 00:13:06.390 "traddr": "10.0.0.2", 00:13:06.391 "adrfam": "ipv4", 00:13:06.391 "trsvcid": "4420", 00:13:06.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:06.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:06.391 "hdgst": false, 00:13:06.391 "ddgst": false 00:13:06.391 }, 00:13:06.391 "method": "bdev_nvme_attach_controller" 00:13:06.391 }' 00:13:06.391 [2024-11-29 11:57:11.845065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:06.391 [2024-11-29 11:57:11.845807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72389 ] 00:13:06.650 [2024-11-29 11:57:11.991706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.650 [2024-11-29 11:57:12.091529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.937 Running I/O for 10 seconds... 
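For reference, the perf job started above is plain bdevperf pointed at the target that was just brought up: the harness generates a one-controller JSON config on the fly (the printf block above) and hands it to bdevperf over a file descriptor. A minimal standalone sketch of the same invocation follows, with the addresses, NQNs and command-line flags copied from this run; the outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json expands to, and the temp-file path is illustrative only.

#!/usr/bin/env bash
# Attach one NVMe-oF/TCP controller as bdev "Nvme0", then run a 10 s verify workload against it.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same RPC socket, queue depth, IO size, workload and runtime as the run above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10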
00:13:07.504 11:57:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.504 11:57:12 -- common/autotest_common.sh@862 -- # return 0 00:13:07.504 11:57:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:07.504 11:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.504 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:07.504 11:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.504 11:57:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.504 11:57:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:07.504 11:57:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:07.504 11:57:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:07.504 11:57:12 -- target/host_management.sh@52 -- # local ret=1 00:13:07.504 11:57:12 -- target/host_management.sh@53 -- # local i 00:13:07.504 11:57:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:07.504 11:57:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:07.504 11:57:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:07.504 11:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.504 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:07.504 11:57:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:07.504 11:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.504 11:57:12 -- target/host_management.sh@55 -- # read_io_count=1790 00:13:07.504 11:57:12 -- target/host_management.sh@58 -- # '[' 1790 -ge 100 ']' 00:13:07.504 11:57:12 -- target/host_management.sh@59 -- # ret=0 00:13:07.504 11:57:12 -- target/host_management.sh@60 -- # break 00:13:07.504 11:57:12 -- target/host_management.sh@64 -- # return 0 00:13:07.504 11:57:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:07.504 11:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.504 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:07.504 [2024-11-29 11:57:12.993005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.504 [2024-11-29 11:57:12.993070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.504 [2024-11-29 11:57:12.993085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.504 [2024-11-29 11:57:12.993095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.504 [2024-11-29 11:57:12.993106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.504 [2024-11-29 11:57:12.993116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.504 [2024-11-29 11:57:12.993126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.504 [2024-11-29 11:57:12.993135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c 11:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.504 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.504 [2024-11-29 11:57:12.993147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb696a0 is same with the state(5) to be set 00:13:07.504 11:57:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:07.504 11:57:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.504 11:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:07.504 [2024-11-29 11:57:12.993766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.504 [2024-11-29 11:57:12.993787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.504 [2024-11-29 11:57:12.993808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.504 [2024-11-29 11:57:12.993819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.993985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.505 [2024-11-29 11:57:12.994607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.505 [2024-11-29 11:57:12.994616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.994981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.994992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.506 [2024-11-29 11:57:12.995169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.506 [2024-11-29 11:57:12.995179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb67120 is same with the state(5) to be set 00:13:07.506 task offset: 115072 on job bdev=Nvme0n1 fails 00:13:07.506 00:13:07.506 Latency(us) 00:13:07.506 [2024-11-29T11:57:13.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.506 [2024-11-29T11:57:13.017Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:07.506 [2024-11-29T11:57:13.017Z] Job: Nvme0n1 ended in about 0.73 seconds with error 00:13:07.506 Verification LBA range: start 0x0 length 0x400 00:13:07.506 Nvme0n1 : 0.73 2617.01 163.56 87.87 0.00 23269.70 1884.16 33363.78 00:13:07.506 [2024-11-29T11:57:13.017Z] =================================================================================================================== 00:13:07.506 [2024-11-29T11:57:13.017Z] Total : 2617.01 163.56 87.87 0.00 23269.70 1884.16 33363.78 00:13:07.506 [2024-11-29 11:57:12.995260] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb67120 was disconnected and freed. reset controller. 
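For orientation (not part of the captured output): the long run of READ/WRITE commands completed with ABORTED - SQ DELETION above is the expected fallout of the host_management reset path — every I/O still queued on qid:1 is failed with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08 (Command Aborted due to SQ Deletion), before the qpair is freed and the controller is reset. A minimal shell sketch for summarizing such a dump instead of reading it entry by entry, assuming a saved copy of this console output in a file named build.log (illustrative name only):

    # Tally aborted commands per opcode and submission queue from the saved log.
    grep -oE '(READ|WRITE) sqid:[0-9]+' build.log | sort | uniq -c
    # Example output shape: "N READ sqid:1" / "M WRITE sqid:1" — how many queued
    # commands of each type were dropped when qid 1 was deleted during the reset.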
00:13:07.506 [2024-11-29 11:57:12.996398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:07.506 [2024-11-29 11:57:12.998774] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:07.506 [2024-11-29 11:57:12.998799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb696a0 (9): Bad file descriptor 00:13:07.506 11:57:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.506 11:57:12 -- target/host_management.sh@87 -- # sleep 1 00:13:07.506 [2024-11-29 11:57:13.004276] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:08.883 11:57:14 -- target/host_management.sh@91 -- # kill -9 72389 00:13:08.883 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72389) - No such process 00:13:08.883 11:57:14 -- target/host_management.sh@91 -- # true 00:13:08.883 11:57:14 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:08.883 11:57:14 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:08.883 11:57:14 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:08.883 11:57:14 -- nvmf/common.sh@520 -- # config=() 00:13:08.883 11:57:14 -- nvmf/common.sh@520 -- # local subsystem config 00:13:08.883 11:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:08.883 11:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:08.883 { 00:13:08.883 "params": { 00:13:08.883 "name": "Nvme$subsystem", 00:13:08.883 "trtype": "$TEST_TRANSPORT", 00:13:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.883 "adrfam": "ipv4", 00:13:08.883 "trsvcid": "$NVMF_PORT", 00:13:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.883 "hdgst": ${hdgst:-false}, 00:13:08.883 "ddgst": ${ddgst:-false} 00:13:08.883 }, 00:13:08.883 "method": "bdev_nvme_attach_controller" 00:13:08.883 } 00:13:08.883 EOF 00:13:08.883 )") 00:13:08.883 11:57:14 -- nvmf/common.sh@542 -- # cat 00:13:08.883 11:57:14 -- nvmf/common.sh@544 -- # jq . 00:13:08.883 11:57:14 -- nvmf/common.sh@545 -- # IFS=, 00:13:08.883 11:57:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:08.883 "params": { 00:13:08.883 "name": "Nvme0", 00:13:08.883 "trtype": "tcp", 00:13:08.883 "traddr": "10.0.0.2", 00:13:08.883 "adrfam": "ipv4", 00:13:08.883 "trsvcid": "4420", 00:13:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:08.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:08.883 "hdgst": false, 00:13:08.883 "ddgst": false 00:13:08.883 }, 00:13:08.883 "method": "bdev_nvme_attach_controller" 00:13:08.883 }' 00:13:08.883 [2024-11-29 11:57:14.061993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:08.883 [2024-11-29 11:57:14.062110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72427 ] 00:13:08.883 [2024-11-29 11:57:14.203108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.883 [2024-11-29 11:57:14.305169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.141 Running I/O for 1 seconds... 
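For orientation (not part of the captured output): the bdevperf instance launched above gets its bdev configuration through --json /dev/fd/62, with gen_nvmf_target_json expanding the heredoc template once per subsystem into the resolved bdev_nvme_attach_controller entry printed in the log. A rough stand-alone sketch — assuming (the wrapper is produced by the helper and not shown verbatim in this excerpt) that the entry is wrapped in a standard SPDK "bdev" subsystem config, and reusing the same workload knobs — would be:

    # Hypothetical hand-rolled config reproducing the entry gen_nvmf_target_json printed above.
    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # Same invocation as the run above: queue depth 64, 64 KiB IOs, verify workload, 1 second.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1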
00:13:10.076 00:13:10.076 Latency(us) 00:13:10.076 [2024-11-29T11:57:15.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.076 [2024-11-29T11:57:15.587Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:10.076 Verification LBA range: start 0x0 length 0x400 00:13:10.076 Nvme0n1 : 1.01 2676.49 167.28 0.00 0.00 23540.46 1303.27 29908.25 00:13:10.076 [2024-11-29T11:57:15.587Z] =================================================================================================================== 00:13:10.076 [2024-11-29T11:57:15.587Z] Total : 2676.49 167.28 0.00 0.00 23540.46 1303.27 29908.25 00:13:10.333 11:57:15 -- target/host_management.sh@101 -- # stoptarget 00:13:10.333 11:57:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:10.333 11:57:15 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:10.333 11:57:15 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:10.333 11:57:15 -- target/host_management.sh@40 -- # nvmftestfini 00:13:10.333 11:57:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:10.333 11:57:15 -- nvmf/common.sh@116 -- # sync 00:13:10.333 11:57:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:10.333 11:57:15 -- nvmf/common.sh@119 -- # set +e 00:13:10.333 11:57:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:10.333 11:57:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:10.334 rmmod nvme_tcp 00:13:10.334 rmmod nvme_fabrics 00:13:10.334 rmmod nvme_keyring 00:13:10.334 11:57:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:10.334 11:57:15 -- nvmf/common.sh@123 -- # set -e 00:13:10.334 11:57:15 -- nvmf/common.sh@124 -- # return 0 00:13:10.334 11:57:15 -- nvmf/common.sh@477 -- # '[' -n 72335 ']' 00:13:10.334 11:57:15 -- nvmf/common.sh@478 -- # killprocess 72335 00:13:10.334 11:57:15 -- common/autotest_common.sh@936 -- # '[' -z 72335 ']' 00:13:10.334 11:57:15 -- common/autotest_common.sh@940 -- # kill -0 72335 00:13:10.592 11:57:15 -- common/autotest_common.sh@941 -- # uname 00:13:10.592 11:57:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.592 11:57:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72335 00:13:10.592 killing process with pid 72335 00:13:10.592 11:57:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:10.592 11:57:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:10.592 11:57:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72335' 00:13:10.592 11:57:15 -- common/autotest_common.sh@955 -- # kill 72335 00:13:10.592 11:57:15 -- common/autotest_common.sh@960 -- # wait 72335 00:13:10.850 [2024-11-29 11:57:16.168304] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:10.850 11:57:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:10.850 11:57:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:10.850 11:57:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:10.850 11:57:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.850 11:57:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:10.850 11:57:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.850 11:57:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.850 11:57:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.850 11:57:16 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:10.850 00:13:10.850 real 0m5.640s 00:13:10.850 user 0m23.558s 00:13:10.850 sys 0m1.428s 00:13:10.850 11:57:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:10.850 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.850 ************************************ 00:13:10.850 END TEST nvmf_host_management 00:13:10.850 ************************************ 00:13:10.851 11:57:16 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:10.851 00:13:10.851 real 0m6.333s 00:13:10.851 user 0m23.745s 00:13:10.851 sys 0m1.725s 00:13:10.851 11:57:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:10.851 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.851 ************************************ 00:13:10.851 END TEST nvmf_host_management 00:13:10.851 ************************************ 00:13:10.851 11:57:16 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:10.851 11:57:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:10.851 11:57:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.851 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.851 ************************************ 00:13:10.851 START TEST nvmf_lvol 00:13:10.851 ************************************ 00:13:10.851 11:57:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:11.110 * Looking for test storage... 00:13:11.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.111 11:57:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:11.111 11:57:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:11.111 11:57:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:11.111 11:57:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:11.111 11:57:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:11.111 11:57:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:11.111 11:57:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:11.111 11:57:16 -- scripts/common.sh@335 -- # IFS=.-: 00:13:11.111 11:57:16 -- scripts/common.sh@335 -- # read -ra ver1 00:13:11.111 11:57:16 -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.111 11:57:16 -- scripts/common.sh@336 -- # read -ra ver2 00:13:11.111 11:57:16 -- scripts/common.sh@337 -- # local 'op=<' 00:13:11.111 11:57:16 -- scripts/common.sh@339 -- # ver1_l=2 00:13:11.111 11:57:16 -- scripts/common.sh@340 -- # ver2_l=1 00:13:11.111 11:57:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:11.111 11:57:16 -- scripts/common.sh@343 -- # case "$op" in 00:13:11.111 11:57:16 -- scripts/common.sh@344 -- # : 1 00:13:11.111 11:57:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:11.111 11:57:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.111 11:57:16 -- scripts/common.sh@364 -- # decimal 1 00:13:11.111 11:57:16 -- scripts/common.sh@352 -- # local d=1 00:13:11.111 11:57:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.111 11:57:16 -- scripts/common.sh@354 -- # echo 1 00:13:11.111 11:57:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:11.111 11:57:16 -- scripts/common.sh@365 -- # decimal 2 00:13:11.111 11:57:16 -- scripts/common.sh@352 -- # local d=2 00:13:11.111 11:57:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.111 11:57:16 -- scripts/common.sh@354 -- # echo 2 00:13:11.111 11:57:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:11.111 11:57:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:11.111 11:57:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:11.111 11:57:16 -- scripts/common.sh@367 -- # return 0 00:13:11.111 11:57:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.111 11:57:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.111 --rc genhtml_branch_coverage=1 00:13:11.111 --rc genhtml_function_coverage=1 00:13:11.111 --rc genhtml_legend=1 00:13:11.111 --rc geninfo_all_blocks=1 00:13:11.111 --rc geninfo_unexecuted_blocks=1 00:13:11.111 00:13:11.111 ' 00:13:11.111 11:57:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.111 --rc genhtml_branch_coverage=1 00:13:11.111 --rc genhtml_function_coverage=1 00:13:11.111 --rc genhtml_legend=1 00:13:11.111 --rc geninfo_all_blocks=1 00:13:11.111 --rc geninfo_unexecuted_blocks=1 00:13:11.111 00:13:11.111 ' 00:13:11.111 11:57:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.111 --rc genhtml_branch_coverage=1 00:13:11.111 --rc genhtml_function_coverage=1 00:13:11.111 --rc genhtml_legend=1 00:13:11.111 --rc geninfo_all_blocks=1 00:13:11.111 --rc geninfo_unexecuted_blocks=1 00:13:11.111 00:13:11.111 ' 00:13:11.111 11:57:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:11.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.111 --rc genhtml_branch_coverage=1 00:13:11.111 --rc genhtml_function_coverage=1 00:13:11.111 --rc genhtml_legend=1 00:13:11.111 --rc geninfo_all_blocks=1 00:13:11.111 --rc geninfo_unexecuted_blocks=1 00:13:11.111 00:13:11.111 ' 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.111 11:57:16 -- nvmf/common.sh@7 -- # uname -s 00:13:11.111 11:57:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.111 11:57:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.111 11:57:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.111 11:57:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.111 11:57:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.111 11:57:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.111 11:57:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.111 11:57:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.111 11:57:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.111 11:57:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:13:11.111 
11:57:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:13:11.111 11:57:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.111 11:57:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.111 11:57:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.111 11:57:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.111 11:57:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.111 11:57:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.111 11:57:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.111 11:57:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.111 11:57:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.111 11:57:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.111 11:57:16 -- paths/export.sh@5 -- # export PATH 00:13:11.111 11:57:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.111 11:57:16 -- nvmf/common.sh@46 -- # : 0 00:13:11.111 11:57:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:11.111 11:57:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:11.111 11:57:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:11.111 11:57:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.111 11:57:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.111 11:57:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:11.111 11:57:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:11.111 11:57:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.111 11:57:16 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:11.111 11:57:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:11.111 11:57:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.111 11:57:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:11.111 11:57:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:11.111 11:57:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:11.111 11:57:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.111 11:57:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.111 11:57:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.111 11:57:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:11.111 11:57:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:11.111 11:57:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.111 11:57:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.111 11:57:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:11.111 11:57:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:11.111 11:57:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.111 11:57:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.111 11:57:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.111 11:57:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.111 11:57:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.111 11:57:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.111 11:57:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.111 11:57:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.111 11:57:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:11.111 11:57:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:11.111 Cannot find device "nvmf_tgt_br" 00:13:11.111 11:57:16 -- nvmf/common.sh@154 -- # true 00:13:11.111 11:57:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.111 Cannot find device "nvmf_tgt_br2" 00:13:11.111 11:57:16 -- nvmf/common.sh@155 -- # true 00:13:11.111 11:57:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:11.111 11:57:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:11.112 Cannot find device "nvmf_tgt_br" 00:13:11.112 11:57:16 -- nvmf/common.sh@157 -- # true 00:13:11.112 11:57:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:11.112 Cannot find device "nvmf_tgt_br2" 00:13:11.112 11:57:16 -- nvmf/common.sh@158 -- # true 00:13:11.112 11:57:16 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:13:11.370 11:57:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:11.370 11:57:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.370 11:57:16 -- nvmf/common.sh@161 -- # true 00:13:11.370 11:57:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.370 11:57:16 -- nvmf/common.sh@162 -- # true 00:13:11.370 11:57:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:11.370 11:57:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:11.370 11:57:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:11.370 11:57:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:11.370 11:57:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:11.370 11:57:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:11.370 11:57:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:11.370 11:57:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:11.370 11:57:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:11.370 11:57:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:11.370 11:57:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:11.370 11:57:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:11.370 11:57:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:11.370 11:57:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:11.370 11:57:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.370 11:57:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:11.370 11:57:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:11.371 11:57:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:11.371 11:57:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:11.371 11:57:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:11.371 11:57:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:11.371 11:57:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:11.371 11:57:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:11.371 11:57:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:11.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:13:11.371 00:13:11.371 --- 10.0.0.2 ping statistics --- 00:13:11.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.371 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:11.371 11:57:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:11.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:11.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:13:11.371 00:13:11.371 --- 10.0.0.3 ping statistics --- 00:13:11.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.371 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:11.371 11:57:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:11.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:13:11.371 00:13:11.371 --- 10.0.0.1 ping statistics --- 00:13:11.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.371 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:11.371 11:57:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.371 11:57:16 -- nvmf/common.sh@421 -- # return 0 00:13:11.371 11:57:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:11.371 11:57:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.371 11:57:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:11.371 11:57:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:11.371 11:57:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.371 11:57:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:11.371 11:57:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:11.629 11:57:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:11.629 11:57:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:11.629 11:57:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.629 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:13:11.629 11:57:16 -- nvmf/common.sh@469 -- # nvmfpid=72666 00:13:11.629 11:57:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:11.629 11:57:16 -- nvmf/common.sh@470 -- # waitforlisten 72666 00:13:11.629 11:57:16 -- common/autotest_common.sh@829 -- # '[' -z 72666 ']' 00:13:11.629 11:57:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.629 11:57:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.629 11:57:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.629 11:57:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.629 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:13:11.629 [2024-11-29 11:57:16.960109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:11.629 [2024-11-29 11:57:16.960213] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.629 [2024-11-29 11:57:17.096594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.888 [2024-11-29 11:57:17.194491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.888 [2024-11-29 11:57:17.194693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.888 [2024-11-29 11:57:17.194708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:11.888 [2024-11-29 11:57:17.194720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.888 [2024-11-29 11:57:17.194829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.888 [2024-11-29 11:57:17.195328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.888 [2024-11-29 11:57:17.195340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.823 11:57:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.823 11:57:18 -- common/autotest_common.sh@862 -- # return 0 00:13:12.823 11:57:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:12.823 11:57:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:12.823 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:13:12.823 11:57:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.823 11:57:18 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.081 [2024-11-29 11:57:18.389617] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.081 11:57:18 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.340 11:57:18 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:13.340 11:57:18 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.598 11:57:19 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:13.598 11:57:19 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:13.856 11:57:19 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:14.424 11:57:19 -- target/nvmf_lvol.sh@29 -- # lvs=f8503228-cd26-4e24-91b0-0c804f15e901 00:13:14.424 11:57:19 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f8503228-cd26-4e24-91b0-0c804f15e901 lvol 20 00:13:14.424 11:57:19 -- target/nvmf_lvol.sh@32 -- # lvol=1dbae488-883a-48bc-b407-c559de45f5bd 00:13:14.424 11:57:19 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:15.005 11:57:20 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1dbae488-883a-48bc-b407-c559de45f5bd 00:13:15.005 11:57:20 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.273 [2024-11-29 11:57:20.719985] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.273 11:57:20 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.840 11:57:21 -- target/nvmf_lvol.sh@42 -- # perf_pid=72742 00:13:15.840 11:57:21 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:15.840 11:57:21 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:16.775 11:57:22 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1dbae488-883a-48bc-b407-c559de45f5bd MY_SNAPSHOT 
00:13:17.033 11:57:22 -- target/nvmf_lvol.sh@47 -- # snapshot=fcbd7362-e471-43ea-bc13-0b66addcfc44 00:13:17.033 11:57:22 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1dbae488-883a-48bc-b407-c559de45f5bd 30 00:13:17.290 11:57:22 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone fcbd7362-e471-43ea-bc13-0b66addcfc44 MY_CLONE 00:13:17.548 11:57:22 -- target/nvmf_lvol.sh@49 -- # clone=55635770-b617-4635-bdb6-f6dd0301e554 00:13:17.548 11:57:22 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 55635770-b617-4635-bdb6-f6dd0301e554 00:13:18.114 11:57:23 -- target/nvmf_lvol.sh@53 -- # wait 72742 00:13:26.229 Initializing NVMe Controllers 00:13:26.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:26.229 Controller IO queue size 128, less than required. 00:13:26.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:26.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:26.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:26.229 Initialization complete. Launching workers. 00:13:26.229 ======================================================== 00:13:26.229 Latency(us) 00:13:26.229 Device Information : IOPS MiB/s Average min max 00:13:26.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7999.00 31.25 16015.91 2537.74 77199.07 00:13:26.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7426.10 29.01 17254.94 3549.91 75050.36 00:13:26.229 ======================================================== 00:13:26.229 Total : 15425.10 60.25 16612.41 2537.74 77199.07 00:13:26.229 00:13:26.229 11:57:31 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:26.487 11:57:31 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1dbae488-883a-48bc-b407-c559de45f5bd 00:13:26.746 11:57:32 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8503228-cd26-4e24-91b0-0c804f15e901 00:13:27.004 11:57:32 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:27.004 11:57:32 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:27.004 11:57:32 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:27.004 11:57:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.004 11:57:32 -- nvmf/common.sh@116 -- # sync 00:13:27.004 11:57:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.004 11:57:32 -- nvmf/common.sh@119 -- # set +e 00:13:27.004 11:57:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.004 11:57:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.004 rmmod nvme_tcp 00:13:27.004 rmmod nvme_fabrics 00:13:27.004 rmmod nvme_keyring 00:13:27.004 11:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.004 11:57:32 -- nvmf/common.sh@123 -- # set -e 00:13:27.004 11:57:32 -- nvmf/common.sh@124 -- # return 0 00:13:27.004 11:57:32 -- nvmf/common.sh@477 -- # '[' -n 72666 ']' 00:13:27.004 11:57:32 -- nvmf/common.sh@478 -- # killprocess 72666 00:13:27.004 11:57:32 -- common/autotest_common.sh@936 -- # '[' -z 72666 ']' 00:13:27.004 11:57:32 -- common/autotest_common.sh@940 -- # kill -0 72666 00:13:27.004 11:57:32 -- common/autotest_common.sh@941 -- # uname 00:13:27.004 
11:57:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.004 11:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72666 00:13:27.004 11:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:27.004 killing process with pid 72666 00:13:27.004 11:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:27.004 11:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72666' 00:13:27.004 11:57:32 -- common/autotest_common.sh@955 -- # kill 72666 00:13:27.004 11:57:32 -- common/autotest_common.sh@960 -- # wait 72666 00:13:27.262 11:57:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.262 11:57:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.262 11:57:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.262 11:57:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.262 11:57:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.262 11:57:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.262 11:57:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.262 11:57:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.578 11:57:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:27.578 ************************************ 00:13:27.578 END TEST nvmf_lvol 00:13:27.578 ************************************ 00:13:27.578 00:13:27.578 real 0m16.458s 00:13:27.578 user 1m7.679s 00:13:27.578 sys 0m4.686s 00:13:27.578 11:57:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:27.578 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.578 11:57:32 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:27.578 11:57:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.578 11:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.578 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.578 ************************************ 00:13:27.578 START TEST nvmf_lvs_grow 00:13:27.578 ************************************ 00:13:27.578 11:57:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:27.578 * Looking for test storage... 
00:13:27.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.578 11:57:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:27.578 11:57:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:27.578 11:57:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:27.578 11:57:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:27.578 11:57:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:27.578 11:57:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:27.578 11:57:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:27.578 11:57:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:27.578 11:57:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:27.578 11:57:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.578 11:57:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:27.578 11:57:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:27.578 11:57:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:27.578 11:57:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:27.578 11:57:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:27.578 11:57:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:27.578 11:57:33 -- scripts/common.sh@344 -- # : 1 00:13:27.578 11:57:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:27.578 11:57:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.579 11:57:33 -- scripts/common.sh@364 -- # decimal 1 00:13:27.579 11:57:33 -- scripts/common.sh@352 -- # local d=1 00:13:27.579 11:57:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.579 11:57:33 -- scripts/common.sh@354 -- # echo 1 00:13:27.579 11:57:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:27.579 11:57:33 -- scripts/common.sh@365 -- # decimal 2 00:13:27.579 11:57:33 -- scripts/common.sh@352 -- # local d=2 00:13:27.579 11:57:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.579 11:57:33 -- scripts/common.sh@354 -- # echo 2 00:13:27.579 11:57:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:27.579 11:57:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:27.579 11:57:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:27.579 11:57:33 -- scripts/common.sh@367 -- # return 0 00:13:27.579 11:57:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.579 11:57:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.579 --rc genhtml_branch_coverage=1 00:13:27.579 --rc genhtml_function_coverage=1 00:13:27.579 --rc genhtml_legend=1 00:13:27.579 --rc geninfo_all_blocks=1 00:13:27.579 --rc geninfo_unexecuted_blocks=1 00:13:27.579 00:13:27.579 ' 00:13:27.579 11:57:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.579 --rc genhtml_branch_coverage=1 00:13:27.579 --rc genhtml_function_coverage=1 00:13:27.579 --rc genhtml_legend=1 00:13:27.579 --rc geninfo_all_blocks=1 00:13:27.579 --rc geninfo_unexecuted_blocks=1 00:13:27.579 00:13:27.579 ' 00:13:27.579 11:57:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.579 --rc genhtml_branch_coverage=1 00:13:27.579 --rc genhtml_function_coverage=1 00:13:27.579 --rc genhtml_legend=1 00:13:27.579 --rc geninfo_all_blocks=1 00:13:27.579 --rc geninfo_unexecuted_blocks=1 00:13:27.579 00:13:27.579 ' 00:13:27.579 
11:57:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:27.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.579 --rc genhtml_branch_coverage=1 00:13:27.579 --rc genhtml_function_coverage=1 00:13:27.579 --rc genhtml_legend=1 00:13:27.579 --rc geninfo_all_blocks=1 00:13:27.579 --rc geninfo_unexecuted_blocks=1 00:13:27.579 00:13:27.579 ' 00:13:27.579 11:57:33 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.579 11:57:33 -- nvmf/common.sh@7 -- # uname -s 00:13:27.579 11:57:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.579 11:57:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.579 11:57:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.579 11:57:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.579 11:57:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.579 11:57:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.579 11:57:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.579 11:57:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.579 11:57:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.579 11:57:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.843 11:57:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:13:27.843 11:57:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:13:27.843 11:57:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.843 11:57:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.843 11:57:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.843 11:57:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.843 11:57:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.843 11:57:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.843 11:57:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.844 11:57:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.844 11:57:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.844 11:57:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.844 11:57:33 -- paths/export.sh@5 -- # export PATH 00:13:27.844 11:57:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.844 11:57:33 -- nvmf/common.sh@46 -- # : 0 00:13:27.844 11:57:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:27.844 11:57:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:27.844 11:57:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:27.844 11:57:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.844 11:57:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.844 11:57:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:27.844 11:57:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:27.844 11:57:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:27.844 11:57:33 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.844 11:57:33 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.844 11:57:33 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:27.844 11:57:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:27.844 11:57:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.844 11:57:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:27.844 11:57:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:27.844 11:57:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:27.844 11:57:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.844 11:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.844 11:57:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.844 11:57:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:27.844 11:57:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:27.844 11:57:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:27.844 11:57:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:27.844 11:57:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:27.844 11:57:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:27.844 11:57:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.844 11:57:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.844 11:57:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.844 11:57:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:27.844 11:57:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.844 11:57:33 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.844 11:57:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.844 11:57:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.844 11:57:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.844 11:57:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.844 11:57:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.844 11:57:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.844 11:57:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:27.844 11:57:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:27.844 Cannot find device "nvmf_tgt_br" 00:13:27.844 11:57:33 -- nvmf/common.sh@154 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.844 Cannot find device "nvmf_tgt_br2" 00:13:27.844 11:57:33 -- nvmf/common.sh@155 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:27.844 11:57:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:27.844 Cannot find device "nvmf_tgt_br" 00:13:27.844 11:57:33 -- nvmf/common.sh@157 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:27.844 Cannot find device "nvmf_tgt_br2" 00:13:27.844 11:57:33 -- nvmf/common.sh@158 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:27.844 11:57:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:27.844 11:57:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.844 11:57:33 -- nvmf/common.sh@161 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.844 11:57:33 -- nvmf/common.sh@162 -- # true 00:13:27.844 11:57:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.844 11:57:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.844 11:57:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.844 11:57:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.844 11:57:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.844 11:57:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.844 11:57:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.844 11:57:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.844 11:57:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.844 11:57:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:27.844 11:57:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:27.844 11:57:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:27.844 11:57:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:27.844 11:57:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.844 11:57:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:13:27.844 11:57:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.844 11:57:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:27.844 11:57:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:28.105 11:57:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.105 11:57:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.105 11:57:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:28.105 11:57:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.105 11:57:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.105 11:57:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:28.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:13:28.105 00:13:28.105 --- 10.0.0.2 ping statistics --- 00:13:28.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.105 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:28.105 11:57:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:28.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:28.105 00:13:28.105 --- 10.0.0.3 ping statistics --- 00:13:28.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.105 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:28.105 11:57:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:28.105 00:13:28.105 --- 10.0.0.1 ping statistics --- 00:13:28.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.105 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:28.105 11:57:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.105 11:57:33 -- nvmf/common.sh@421 -- # return 0 00:13:28.105 11:57:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.105 11:57:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.105 11:57:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.105 11:57:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.105 11:57:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.105 11:57:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.105 11:57:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.105 11:57:33 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:28.105 11:57:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.105 11:57:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.105 11:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.105 11:57:33 -- nvmf/common.sh@469 -- # nvmfpid=73079 00:13:28.105 11:57:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:28.106 11:57:33 -- nvmf/common.sh@470 -- # waitforlisten 73079 00:13:28.106 11:57:33 -- common/autotest_common.sh@829 -- # '[' -z 73079 ']' 00:13:28.106 11:57:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.106 11:57:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.106 11:57:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:28.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.106 11:57:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.106 11:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.106 [2024-11-29 11:57:33.507960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:28.106 [2024-11-29 11:57:33.508049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.365 [2024-11-29 11:57:33.639712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.365 [2024-11-29 11:57:33.742718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.365 [2024-11-29 11:57:33.742901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.365 [2024-11-29 11:57:33.742922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.365 [2024-11-29 11:57:33.742935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.365 [2024-11-29 11:57:33.742969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.301 11:57:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.301 11:57:34 -- common/autotest_common.sh@862 -- # return 0 00:13:29.301 11:57:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.301 11:57:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.301 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.301 11:57:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.301 11:57:34 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.559 [2024-11-29 11:57:34.954900] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.559 11:57:34 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:29.559 11:57:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:29.559 11:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.559 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.559 ************************************ 00:13:29.559 START TEST lvs_grow_clean 00:13:29.559 ************************************ 00:13:29.559 11:57:34 -- common/autotest_common.sh@1114 -- # lvs_grow 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:29.560 11:57:34 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.816 11:57:35 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:29.816 11:57:35 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:30.073 11:57:35 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:30.073 11:57:35 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:30.073 11:57:35 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:30.331 11:57:35 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:30.331 11:57:35 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:30.331 11:57:35 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 lvol 150 00:13:30.903 11:57:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9aebdb92-1bdb-40fa-9a43-d488299e9697 00:13:30.903 11:57:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:30.903 11:57:36 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:30.903 [2024-11-29 11:57:36.378484] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:30.903 [2024-11-29 11:57:36.378589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:30.903 true 00:13:30.903 11:57:36 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:30.903 11:57:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:31.161 11:57:36 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:31.161 11:57:36 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.418 11:57:36 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9aebdb92-1bdb-40fa-9a43-d488299e9697 00:13:31.676 11:57:37 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:32.300 [2024-11-29 11:57:37.451238] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.300 11:57:37 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.300 11:57:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73167 00:13:32.300 11:57:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.300 11:57:37 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73167 /var/tmp/bdevperf.sock 00:13:32.300 11:57:37 -- common/autotest_common.sh@829 -- # '[' -z 73167 ']' 00:13:32.300 11:57:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
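At this point lvs_grow_clean has built its fixture: a 200 MiB file-backed AIO bdev, an lvstore with a 4 MiB cluster size on top of it, and a 150 MiB lvol exported over NVMe/TCP. The backing file has already been grown to 400 MiB and rescanned, but the lvstore still reports 49 data clusters; only the bdev_lvol_grow_lvstore call further down in the trace raises the count to 99. The expected counts are consistent with one cluster being reserved for metadata: 200 MiB / 4 MiB - 1 = 49 and 400 MiB / 4 MiB - 1 = 99. Condensed from the trace, with $RPC standing in for scripts/rpc.py and $AIO for the aio_bdev file path used above:

    truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                         # AIO bdev, 4 KiB blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)              # prints the lvstore UUID
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    $RPC bdev_lvol_create -u "$lvs" lvol 150                          # 150 MiB logical volume
    truncate -s 400M "$AIO"                                           # grow the backing file
    $RPC bdev_aio_rescan aio_bdev                                     # block count 51200 -> 102400
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    $RPC bdev_lvol_grow_lvstore -u "$lvs"                             # claim the new space -> 99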
00:13:32.300 11:57:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.300 11:57:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.300 11:57:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.300 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:32.300 11:57:37 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:32.300 [2024-11-29 11:57:37.765268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:32.300 [2024-11-29 11:57:37.765403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73167 ] 00:13:32.557 [2024-11-29 11:57:37.906275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.557 [2024-11-29 11:57:38.035352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.488 11:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.488 11:57:38 -- common/autotest_common.sh@862 -- # return 0 00:13:33.488 11:57:38 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:33.745 Nvme0n1 00:13:33.745 11:57:39 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:34.002 [ 00:13:34.002 { 00:13:34.002 "name": "Nvme0n1", 00:13:34.002 "aliases": [ 00:13:34.002 "9aebdb92-1bdb-40fa-9a43-d488299e9697" 00:13:34.002 ], 00:13:34.002 "product_name": "NVMe disk", 00:13:34.002 "block_size": 4096, 00:13:34.002 "num_blocks": 38912, 00:13:34.002 "uuid": "9aebdb92-1bdb-40fa-9a43-d488299e9697", 00:13:34.002 "assigned_rate_limits": { 00:13:34.002 "rw_ios_per_sec": 0, 00:13:34.002 "rw_mbytes_per_sec": 0, 00:13:34.002 "r_mbytes_per_sec": 0, 00:13:34.002 "w_mbytes_per_sec": 0 00:13:34.002 }, 00:13:34.002 "claimed": false, 00:13:34.002 "zoned": false, 00:13:34.002 "supported_io_types": { 00:13:34.002 "read": true, 00:13:34.002 "write": true, 00:13:34.002 "unmap": true, 00:13:34.002 "write_zeroes": true, 00:13:34.002 "flush": true, 00:13:34.002 "reset": true, 00:13:34.002 "compare": true, 00:13:34.002 "compare_and_write": true, 00:13:34.002 "abort": true, 00:13:34.002 "nvme_admin": true, 00:13:34.002 "nvme_io": true 00:13:34.002 }, 00:13:34.002 "driver_specific": { 00:13:34.002 "nvme": [ 00:13:34.002 { 00:13:34.002 "trid": { 00:13:34.002 "trtype": "TCP", 00:13:34.002 "adrfam": "IPv4", 00:13:34.002 "traddr": "10.0.0.2", 00:13:34.002 "trsvcid": "4420", 00:13:34.002 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:34.002 }, 00:13:34.002 "ctrlr_data": { 00:13:34.002 "cntlid": 1, 00:13:34.002 "vendor_id": "0x8086", 00:13:34.002 "model_number": "SPDK bdev Controller", 00:13:34.002 "serial_number": "SPDK0", 00:13:34.002 "firmware_revision": "24.01.1", 00:13:34.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:34.002 "oacs": { 00:13:34.002 "security": 0, 00:13:34.002 "format": 0, 00:13:34.002 "firmware": 0, 00:13:34.002 "ns_manage": 0 00:13:34.002 }, 00:13:34.002 "multi_ctrlr": true, 00:13:34.002 "ana_reporting": false 00:13:34.002 }, 00:13:34.002 "vs": { 00:13:34.002 
"nvme_version": "1.3" 00:13:34.002 }, 00:13:34.002 "ns_data": { 00:13:34.002 "id": 1, 00:13:34.002 "can_share": true 00:13:34.002 } 00:13:34.002 } 00:13:34.002 ], 00:13:34.002 "mp_policy": "active_passive" 00:13:34.002 } 00:13:34.002 } 00:13:34.002 ] 00:13:34.002 11:57:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73191 00:13:34.002 11:57:39 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:34.002 11:57:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:34.260 Running I/O for 10 seconds... 00:13:35.194 Latency(us) 00:13:35.194 [2024-11-29T11:57:40.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.194 [2024-11-29T11:57:40.705Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.194 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:13:35.194 [2024-11-29T11:57:40.705Z] =================================================================================================================== 00:13:35.194 [2024-11-29T11:57:40.705Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:13:35.194 00:13:36.127 11:57:41 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:36.127 [2024-11-29T11:57:41.638Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.127 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:13:36.127 [2024-11-29T11:57:41.638Z] =================================================================================================================== 00:13:36.127 [2024-11-29T11:57:41.638Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:13:36.127 00:13:36.384 true 00:13:36.384 11:57:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:36.384 11:57:41 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:36.643 11:57:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:36.643 11:57:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:36.643 11:57:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 73191 00:13:37.209 [2024-11-29T11:57:42.720Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.209 Nvme0n1 : 3.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:13:37.209 [2024-11-29T11:57:42.720Z] =================================================================================================================== 00:13:37.209 [2024-11-29T11:57:42.720Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:13:37.209 00:13:38.141 [2024-11-29T11:57:43.652Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.141 Nvme0n1 : 4.00 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:13:38.141 [2024-11-29T11:57:43.652Z] =================================================================================================================== 00:13:38.141 [2024-11-29T11:57:43.652Z] Total : 7016.75 27.41 0.00 0.00 0.00 0.00 0.00 00:13:38.141 00:13:39.075 [2024-11-29T11:57:44.586Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.075 Nvme0n1 : 5.00 7010.40 27.38 0.00 0.00 0.00 0.00 0.00 00:13:39.075 [2024-11-29T11:57:44.586Z] =================================================================================================================== 00:13:39.075 [2024-11-29T11:57:44.586Z] Total : 7010.40 27.38 
0.00 0.00 0.00 0.00 0.00 00:13:39.075 00:13:40.449 [2024-11-29T11:57:45.960Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.449 Nvme0n1 : 6.00 6963.83 27.20 0.00 0.00 0.00 0.00 0.00 00:13:40.449 [2024-11-29T11:57:45.960Z] =================================================================================================================== 00:13:40.449 [2024-11-29T11:57:45.960Z] Total : 6963.83 27.20 0.00 0.00 0.00 0.00 0.00 00:13:40.449 00:13:41.382 [2024-11-29T11:57:46.893Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.382 Nvme0n1 : 7.00 6948.71 27.14 0.00 0.00 0.00 0.00 0.00 00:13:41.382 [2024-11-29T11:57:46.893Z] =================================================================================================================== 00:13:41.382 [2024-11-29T11:57:46.893Z] Total : 6948.71 27.14 0.00 0.00 0.00 0.00 0.00 00:13:41.382 00:13:42.315 [2024-11-29T11:57:47.826Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.315 Nvme0n1 : 8.00 6937.38 27.10 0.00 0.00 0.00 0.00 0.00 00:13:42.315 [2024-11-29T11:57:47.826Z] =================================================================================================================== 00:13:42.315 [2024-11-29T11:57:47.826Z] Total : 6937.38 27.10 0.00 0.00 0.00 0.00 0.00 00:13:42.315 00:13:43.308 [2024-11-29T11:57:48.819Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.308 Nvme0n1 : 9.00 6928.56 27.06 0.00 0.00 0.00 0.00 0.00 00:13:43.308 [2024-11-29T11:57:48.819Z] =================================================================================================================== 00:13:43.308 [2024-11-29T11:57:48.819Z] Total : 6928.56 27.06 0.00 0.00 0.00 0.00 0.00 00:13:43.308 00:13:44.240 [2024-11-29T11:57:49.751Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.240 Nvme0n1 : 10.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:13:44.240 [2024-11-29T11:57:49.751Z] =================================================================================================================== 00:13:44.240 [2024-11-29T11:57:49.751Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:13:44.240 00:13:44.240 00:13:44.240 Latency(us) 00:13:44.240 [2024-11-29T11:57:49.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.240 [2024-11-29T11:57:49.751Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.240 Nvme0n1 : 10.00 6919.44 27.03 0.00 0.00 18491.60 16324.42 49092.42 00:13:44.240 [2024-11-29T11:57:49.751Z] =================================================================================================================== 00:13:44.240 [2024-11-29T11:57:49.751Z] Total : 6919.44 27.03 0.00 0.00 18491.60 16324.42 49092.42 00:13:44.240 0 00:13:44.240 11:57:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73167 00:13:44.240 11:57:49 -- common/autotest_common.sh@936 -- # '[' -z 73167 ']' 00:13:44.240 11:57:49 -- common/autotest_common.sh@940 -- # kill -0 73167 00:13:44.240 11:57:49 -- common/autotest_common.sh@941 -- # uname 00:13:44.240 11:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.240 11:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73167 00:13:44.240 killing process with pid 73167 00:13:44.240 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.240 00:13:44.240 Latency(us) 00:13:44.240 [2024-11-29T11:57:49.751Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:13:44.240 [2024-11-29T11:57:49.751Z] =================================================================================================================== 00:13:44.240 [2024-11-29T11:57:49.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.240 11:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:44.240 11:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:44.240 11:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73167' 00:13:44.240 11:57:49 -- common/autotest_common.sh@955 -- # kill 73167 00:13:44.240 11:57:49 -- common/autotest_common.sh@960 -- # wait 73167 00:13:44.498 11:57:49 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:44.755 11:57:50 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:44.755 11:57:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:45.012 11:57:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:45.012 11:57:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:13:45.012 11:57:50 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.270 [2024-11-29 11:57:50.693216] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.270 11:57:50 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:45.270 11:57:50 -- common/autotest_common.sh@650 -- # local es=0 00:13:45.270 11:57:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:45.270 11:57:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.270 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.270 11:57:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.270 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.270 11:57:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.270 11:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.270 11:57:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.270 11:57:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:45.270 11:57:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:45.529 request: 00:13:45.529 { 00:13:45.529 "uuid": "ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031", 00:13:45.529 "method": "bdev_lvol_get_lvstores", 00:13:45.529 "req_id": 1 00:13:45.529 } 00:13:45.529 Got JSON-RPC error response 00:13:45.529 response: 00:13:45.529 { 00:13:45.529 "code": -19, 00:13:45.529 "message": "No such device" 00:13:45.529 } 00:13:45.529 11:57:50 -- common/autotest_common.sh@653 -- # es=1 00:13:45.529 11:57:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.529 11:57:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.529 11:57:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
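The block above is the negative check: after bdev_aio_delete removes the base bdev, the lvstore must no longer be reachable, so the harness wraps bdev_lvol_get_lvstores in NOT and treats the JSON-RPC error -19 ("No such device") as the passing outcome. A plain-bash sketch of the same assertion, not the actual NOT() helper from autotest_common.sh:

    # Expect the RPC to fail once the backing aio_bdev is gone:
    if "$RPC" bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore is still visible after aio_bdev removal" >&2
        exit 1
    fi
    # Reaching here means the call failed as intended (error -19, "No such device").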
00:13:45.529 11:57:50 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.787 aio_bdev 00:13:45.787 11:57:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9aebdb92-1bdb-40fa-9a43-d488299e9697 00:13:45.787 11:57:51 -- common/autotest_common.sh@897 -- # local bdev_name=9aebdb92-1bdb-40fa-9a43-d488299e9697 00:13:45.787 11:57:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.787 11:57:51 -- common/autotest_common.sh@899 -- # local i 00:13:45.787 11:57:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.787 11:57:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.787 11:57:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:46.045 11:57:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9aebdb92-1bdb-40fa-9a43-d488299e9697 -t 2000 00:13:46.303 [ 00:13:46.303 { 00:13:46.303 "name": "9aebdb92-1bdb-40fa-9a43-d488299e9697", 00:13:46.303 "aliases": [ 00:13:46.303 "lvs/lvol" 00:13:46.303 ], 00:13:46.303 "product_name": "Logical Volume", 00:13:46.303 "block_size": 4096, 00:13:46.303 "num_blocks": 38912, 00:13:46.303 "uuid": "9aebdb92-1bdb-40fa-9a43-d488299e9697", 00:13:46.303 "assigned_rate_limits": { 00:13:46.303 "rw_ios_per_sec": 0, 00:13:46.303 "rw_mbytes_per_sec": 0, 00:13:46.303 "r_mbytes_per_sec": 0, 00:13:46.303 "w_mbytes_per_sec": 0 00:13:46.303 }, 00:13:46.303 "claimed": false, 00:13:46.303 "zoned": false, 00:13:46.303 "supported_io_types": { 00:13:46.303 "read": true, 00:13:46.303 "write": true, 00:13:46.303 "unmap": true, 00:13:46.303 "write_zeroes": true, 00:13:46.303 "flush": false, 00:13:46.303 "reset": true, 00:13:46.303 "compare": false, 00:13:46.303 "compare_and_write": false, 00:13:46.303 "abort": false, 00:13:46.303 "nvme_admin": false, 00:13:46.303 "nvme_io": false 00:13:46.303 }, 00:13:46.303 "driver_specific": { 00:13:46.303 "lvol": { 00:13:46.303 "lvol_store_uuid": "ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031", 00:13:46.303 "base_bdev": "aio_bdev", 00:13:46.303 "thin_provision": false, 00:13:46.303 "snapshot": false, 00:13:46.303 "clone": false, 00:13:46.303 "esnap_clone": false 00:13:46.303 } 00:13:46.303 } 00:13:46.303 } 00:13:46.303 ] 00:13:46.561 11:57:51 -- common/autotest_common.sh@905 -- # return 0 00:13:46.561 11:57:51 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:46.561 11:57:51 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:46.819 11:57:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:46.819 11:57:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:46.819 11:57:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:47.076 11:57:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:47.076 11:57:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9aebdb92-1bdb-40fa-9a43-d488299e9697 00:13:47.349 11:57:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff7cbd25-46fe-4d2a-94cf-7bcf7bdac031 00:13:47.609 11:57:52 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
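Re-creating the AIO bdev lets examine find the lvstore again, so the lvol reappears under the alias "lvs/lvol" and the cluster counters can be re-checked before teardown. The numbers the test asserts are internally consistent, assuming 4 MiB clusters and a single metadata cluster (an inference from the checks, not something the trace states explicitly):

    #   total_data_clusters = 400 MiB / 4 MiB - 1   = 99
    #   "lvol" (150 MiB)    occupies ceil(150 / 4)  = 38 clusters
    #   free_clusters       = 99 - 38               = 61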
00:13:47.867 11:57:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:48.432 ************************************ 00:13:48.432 END TEST lvs_grow_clean 00:13:48.432 ************************************ 00:13:48.432 00:13:48.432 real 0m18.652s 00:13:48.432 user 0m17.622s 00:13:48.432 sys 0m2.675s 00:13:48.432 11:57:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.432 11:57:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:48.432 11:57:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.432 11:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.432 11:57:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.432 ************************************ 00:13:48.432 START TEST lvs_grow_dirty 00:13:48.432 ************************************ 00:13:48.432 11:57:53 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:48.432 11:57:53 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:48.690 11:57:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:48.690 11:57:54 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:48.947 11:57:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:13:48.947 11:57:54 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:13:48.947 11:57:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:49.205 11:57:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:49.205 11:57:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:49.205 11:57:54 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee lvol 150 00:13:49.462 11:57:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:13:49.462 11:57:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:49.462 11:57:54 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:49.719 [2024-11-29 11:57:55.221617] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:49.719 [2024-11-29 11:57:55.221738] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:49.719 true 00:13:49.976 11:57:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:49.977 11:57:55 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:13:50.235 11:57:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:50.235 11:57:55 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:50.493 11:57:55 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:13:50.751 11:57:56 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:51.009 11:57:56 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.267 11:57:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73444 00:13:51.267 11:57:56 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:51.267 11:57:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.267 11:57:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73444 /var/tmp/bdevperf.sock 00:13:51.267 11:57:56 -- common/autotest_common.sh@829 -- # '[' -z 73444 ']' 00:13:51.267 11:57:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.267 11:57:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.267 11:57:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.267 11:57:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.267 11:57:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.267 [2024-11-29 11:57:56.674943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
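The export and load-generation path is the same as in the clean case: the lvol becomes namespace 1 of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420, and a separate bdevperf process on core mask 0x2 attaches over TCP and runs 4 KiB random writes at queue depth 128 for 10 seconds. Pulled together from the trace, with $lvol_uuid standing in for the UUID printed by bdev_lvol_create and the socket paths taken from this run:

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &                  # waits for RPC commands
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0           # exposes Nvme0n1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests                             # "Running I/O for 10 seconds..."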
00:13:51.267 [2024-11-29 11:57:56.675059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73444 ] 00:13:51.524 [2024-11-29 11:57:56.812616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.524 [2024-11-29 11:57:56.913969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.457 11:57:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.457 11:57:57 -- common/autotest_common.sh@862 -- # return 0 00:13:52.457 11:57:57 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:52.715 Nvme0n1 00:13:52.715 11:57:58 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:52.991 [ 00:13:52.991 { 00:13:52.991 "name": "Nvme0n1", 00:13:52.991 "aliases": [ 00:13:52.991 "615a35c1-ecfb-4cc5-bf56-54c77d795b2e" 00:13:52.991 ], 00:13:52.991 "product_name": "NVMe disk", 00:13:52.991 "block_size": 4096, 00:13:52.991 "num_blocks": 38912, 00:13:52.991 "uuid": "615a35c1-ecfb-4cc5-bf56-54c77d795b2e", 00:13:52.991 "assigned_rate_limits": { 00:13:52.991 "rw_ios_per_sec": 0, 00:13:52.991 "rw_mbytes_per_sec": 0, 00:13:52.991 "r_mbytes_per_sec": 0, 00:13:52.991 "w_mbytes_per_sec": 0 00:13:52.991 }, 00:13:52.991 "claimed": false, 00:13:52.991 "zoned": false, 00:13:52.991 "supported_io_types": { 00:13:52.991 "read": true, 00:13:52.991 "write": true, 00:13:52.991 "unmap": true, 00:13:52.991 "write_zeroes": true, 00:13:52.992 "flush": true, 00:13:52.992 "reset": true, 00:13:52.992 "compare": true, 00:13:52.992 "compare_and_write": true, 00:13:52.992 "abort": true, 00:13:52.992 "nvme_admin": true, 00:13:52.992 "nvme_io": true 00:13:52.992 }, 00:13:52.992 "driver_specific": { 00:13:52.992 "nvme": [ 00:13:52.992 { 00:13:52.992 "trid": { 00:13:52.992 "trtype": "TCP", 00:13:52.992 "adrfam": "IPv4", 00:13:52.992 "traddr": "10.0.0.2", 00:13:52.992 "trsvcid": "4420", 00:13:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:52.992 }, 00:13:52.992 "ctrlr_data": { 00:13:52.992 "cntlid": 1, 00:13:52.992 "vendor_id": "0x8086", 00:13:52.992 "model_number": "SPDK bdev Controller", 00:13:52.992 "serial_number": "SPDK0", 00:13:52.992 "firmware_revision": "24.01.1", 00:13:52.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.992 "oacs": { 00:13:52.992 "security": 0, 00:13:52.992 "format": 0, 00:13:52.992 "firmware": 0, 00:13:52.992 "ns_manage": 0 00:13:52.992 }, 00:13:52.992 "multi_ctrlr": true, 00:13:52.992 "ana_reporting": false 00:13:52.992 }, 00:13:52.992 "vs": { 00:13:52.992 "nvme_version": "1.3" 00:13:52.992 }, 00:13:52.992 "ns_data": { 00:13:52.992 "id": 1, 00:13:52.992 "can_share": true 00:13:52.992 } 00:13:52.992 } 00:13:52.992 ], 00:13:52.992 "mp_policy": "active_passive" 00:13:52.992 } 00:13:52.992 } 00:13:52.992 ] 00:13:52.992 11:57:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73473 00:13:52.992 11:57:58 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:52.992 11:57:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:53.249 Running I/O for 10 seconds... 
00:13:54.181 Latency(us) 00:13:54.181 [2024-11-29T11:57:59.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.181 [2024-11-29T11:57:59.692Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.181 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:13:54.181 [2024-11-29T11:57:59.692Z] =================================================================================================================== 00:13:54.181 [2024-11-29T11:57:59.692Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:13:54.181 00:13:55.112 11:58:00 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:13:55.112 [2024-11-29T11:58:00.623Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.112 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:13:55.112 [2024-11-29T11:58:00.623Z] =================================================================================================================== 00:13:55.112 [2024-11-29T11:58:00.623Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:13:55.112 00:13:55.369 true 00:13:55.626 11:58:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:55.626 11:58:00 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:13:55.883 11:58:01 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:55.883 11:58:01 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:55.883 11:58:01 -- target/nvmf_lvs_grow.sh@65 -- # wait 73473 00:13:56.140 [2024-11-29T11:58:01.652Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.141 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:13:56.141 [2024-11-29T11:58:01.652Z] =================================================================================================================== 00:13:56.141 [2024-11-29T11:58:01.652Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:13:56.141 00:13:57.070 [2024-11-29T11:58:02.581Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.070 Nvme0n1 : 4.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:13:57.070 [2024-11-29T11:58:02.581Z] =================================================================================================================== 00:13:57.070 [2024-11-29T11:58:02.581Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:13:57.070 00:13:58.442 [2024-11-29T11:58:03.953Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.442 Nvme0n1 : 5.00 6563.00 25.64 0.00 0.00 0.00 0.00 0.00 00:13:58.442 [2024-11-29T11:58:03.953Z] =================================================================================================================== 00:13:58.442 [2024-11-29T11:58:03.953Z] Total : 6563.00 25.64 0.00 0.00 0.00 0.00 0.00 00:13:58.442 00:13:59.376 [2024-11-29T11:58:04.887Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.376 Nvme0n1 : 6.00 6485.17 25.33 0.00 0.00 0.00 0.00 0.00 00:13:59.376 [2024-11-29T11:58:04.887Z] =================================================================================================================== 00:13:59.376 [2024-11-29T11:58:04.887Z] Total : 6485.17 25.33 0.00 0.00 0.00 0.00 0.00 00:13:59.376 00:14:00.309 [2024-11-29T11:58:05.820Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:00.309 Nvme0n1 : 7.00 6429.57 25.12 0.00 0.00 0.00 0.00 0.00 00:14:00.309 [2024-11-29T11:58:05.820Z] =================================================================================================================== 00:14:00.309 [2024-11-29T11:58:05.820Z] Total : 6429.57 25.12 0.00 0.00 0.00 0.00 0.00 00:14:00.309 00:14:01.244 [2024-11-29T11:58:06.755Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.244 Nvme0n1 : 8.00 6435.50 25.14 0.00 0.00 0.00 0.00 0.00 00:14:01.244 [2024-11-29T11:58:06.755Z] =================================================================================================================== 00:14:01.244 [2024-11-29T11:58:06.755Z] Total : 6435.50 25.14 0.00 0.00 0.00 0.00 0.00 00:14:01.244 00:14:02.178 [2024-11-29T11:58:07.689Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.178 Nvme0n1 : 9.00 6454.22 25.21 0.00 0.00 0.00 0.00 0.00 00:14:02.178 [2024-11-29T11:58:07.689Z] =================================================================================================================== 00:14:02.178 [2024-11-29T11:58:07.689Z] Total : 6454.22 25.21 0.00 0.00 0.00 0.00 0.00 00:14:02.178 00:14:03.132 [2024-11-29T11:58:08.643Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.132 Nvme0n1 : 10.00 6456.50 25.22 0.00 0.00 0.00 0.00 0.00 00:14:03.132 [2024-11-29T11:58:08.643Z] =================================================================================================================== 00:14:03.132 [2024-11-29T11:58:08.643Z] Total : 6456.50 25.22 0.00 0.00 0.00 0.00 0.00 00:14:03.132 00:14:03.132 00:14:03.132 Latency(us) 00:14:03.132 [2024-11-29T11:58:08.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.132 [2024-11-29T11:58:08.643Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.132 Nvme0n1 : 10.00 6466.61 25.26 0.00 0.00 19786.28 11856.06 166818.91 00:14:03.132 [2024-11-29T11:58:08.643Z] =================================================================================================================== 00:14:03.132 [2024-11-29T11:58:08.643Z] Total : 6466.61 25.26 0.00 0.00 19786.28 11856.06 166818.91 00:14:03.132 0 00:14:03.132 11:58:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73444 00:14:03.132 11:58:08 -- common/autotest_common.sh@936 -- # '[' -z 73444 ']' 00:14:03.132 11:58:08 -- common/autotest_common.sh@940 -- # kill -0 73444 00:14:03.132 11:58:08 -- common/autotest_common.sh@941 -- # uname 00:14:03.132 11:58:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.132 11:58:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73444 00:14:03.132 killing process with pid 73444 00:14:03.132 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.132 00:14:03.132 Latency(us) 00:14:03.132 [2024-11-29T11:58:08.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.132 [2024-11-29T11:58:08.643Z] =================================================================================================================== 00:14:03.132 [2024-11-29T11:58:08.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.132 11:58:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:03.132 11:58:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:03.132 11:58:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73444' 00:14:03.132 11:58:08 -- 
common/autotest_common.sh@955 -- # kill 73444 00:14:03.132 11:58:08 -- common/autotest_common.sh@960 -- # wait 73444 00:14:03.389 11:58:08 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:03.646 11:58:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:03.646 11:58:09 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73079 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@74 -- # wait 73079 00:14:04.212 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73079 Killed "${NVMF_APP[@]}" "$@" 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:04.212 11:58:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:04.212 11:58:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:04.212 11:58:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.212 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.212 11:58:09 -- nvmf/common.sh@469 -- # nvmfpid=73605 00:14:04.212 11:58:09 -- nvmf/common.sh@470 -- # waitforlisten 73605 00:14:04.212 11:58:09 -- common/autotest_common.sh@829 -- # '[' -z 73605 ']' 00:14:04.212 11:58:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.212 11:58:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.212 11:58:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:04.212 11:58:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.212 11:58:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.212 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 [2024-11-29 11:58:09.526750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:04.212 [2024-11-29 11:58:09.527074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.212 [2024-11-29 11:58:09.661480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.470 [2024-11-29 11:58:09.786791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:04.470 [2024-11-29 11:58:09.787317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.470 [2024-11-29 11:58:09.787342] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.470 [2024-11-29 11:58:09.787353] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
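This is where the dirty case differs from the clean one: the nvmf target that owns the lvstore is killed with SIGKILL mid-test, so the lvstore is never cleanly closed, and a fresh target is started in the same namespace. When the backing file is re-attached in the next chunk, blobstore recovery runs ("Performing recovery on blobstore") before the free/total cluster checks are repeated. In outline, with $nvmfpid, $AIO and $lvs standing in for the values of this run:

    kill -9 "$nvmfpid"                                           # pid 73079 here; lvstore left dirty
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restart the target
    $RPC bdev_aio_create "$AIO" aio_bdev 4096      # re-attach the file; blobstore recovery happens here
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'            # expected: 61 again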
00:14:04.470 [2024-11-29 11:58:09.787391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.037 11:58:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.037 11:58:10 -- common/autotest_common.sh@862 -- # return 0 00:14:05.037 11:58:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:05.037 11:58:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.037 11:58:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.037 11:58:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.037 11:58:10 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:05.602 [2024-11-29 11:58:10.913869] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:05.602 [2024-11-29 11:58:10.914155] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:05.602 [2024-11-29 11:58:10.914386] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:05.602 11:58:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:05.602 11:58:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:14:05.602 11:58:10 -- common/autotest_common.sh@897 -- # local bdev_name=615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:14:05.602 11:58:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:05.602 11:58:10 -- common/autotest_common.sh@899 -- # local i 00:14:05.602 11:58:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:05.602 11:58:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:05.602 11:58:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:05.860 11:58:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 615a35c1-ecfb-4cc5-bf56-54c77d795b2e -t 2000 00:14:06.118 [ 00:14:06.118 { 00:14:06.118 "name": "615a35c1-ecfb-4cc5-bf56-54c77d795b2e", 00:14:06.118 "aliases": [ 00:14:06.118 "lvs/lvol" 00:14:06.118 ], 00:14:06.118 "product_name": "Logical Volume", 00:14:06.118 "block_size": 4096, 00:14:06.118 "num_blocks": 38912, 00:14:06.118 "uuid": "615a35c1-ecfb-4cc5-bf56-54c77d795b2e", 00:14:06.118 "assigned_rate_limits": { 00:14:06.118 "rw_ios_per_sec": 0, 00:14:06.118 "rw_mbytes_per_sec": 0, 00:14:06.118 "r_mbytes_per_sec": 0, 00:14:06.118 "w_mbytes_per_sec": 0 00:14:06.118 }, 00:14:06.118 "claimed": false, 00:14:06.118 "zoned": false, 00:14:06.118 "supported_io_types": { 00:14:06.118 "read": true, 00:14:06.118 "write": true, 00:14:06.118 "unmap": true, 00:14:06.118 "write_zeroes": true, 00:14:06.118 "flush": false, 00:14:06.118 "reset": true, 00:14:06.118 "compare": false, 00:14:06.118 "compare_and_write": false, 00:14:06.118 "abort": false, 00:14:06.119 "nvme_admin": false, 00:14:06.119 "nvme_io": false 00:14:06.119 }, 00:14:06.119 "driver_specific": { 00:14:06.119 "lvol": { 00:14:06.119 "lvol_store_uuid": "dfe67a10-19a0-41cc-b0e2-89747d0ca9ee", 00:14:06.119 "base_bdev": "aio_bdev", 00:14:06.119 "thin_provision": false, 00:14:06.119 "snapshot": false, 00:14:06.119 "clone": false, 00:14:06.119 "esnap_clone": false 00:14:06.119 } 00:14:06.119 } 00:14:06.119 } 00:14:06.119 ] 00:14:06.119 11:58:11 -- common/autotest_common.sh@905 -- # return 0 00:14:06.119 11:58:11 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:06.119 11:58:11 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:06.376 11:58:11 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:06.376 11:58:11 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:06.376 11:58:11 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:06.681 11:58:12 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:06.681 11:58:12 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:06.938 [2024-11-29 11:58:12.335156] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:06.938 11:58:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:06.938 11:58:12 -- common/autotest_common.sh@650 -- # local es=0 00:14:06.938 11:58:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:06.938 11:58:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.939 11:58:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.939 11:58:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.939 11:58:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.939 11:58:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.939 11:58:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.939 11:58:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.939 11:58:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:06.939 11:58:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:07.196 request: 00:14:07.196 { 00:14:07.196 "uuid": "dfe67a10-19a0-41cc-b0e2-89747d0ca9ee", 00:14:07.196 "method": "bdev_lvol_get_lvstores", 00:14:07.196 "req_id": 1 00:14:07.196 } 00:14:07.196 Got JSON-RPC error response 00:14:07.196 response: 00:14:07.196 { 00:14:07.196 "code": -19, 00:14:07.196 "message": "No such device" 00:14:07.196 } 00:14:07.196 11:58:12 -- common/autotest_common.sh@653 -- # es=1 00:14:07.196 11:58:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.196 11:58:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.196 11:58:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.196 11:58:12 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:07.455 aio_bdev 00:14:07.455 11:58:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:14:07.455 11:58:12 -- common/autotest_common.sh@897 -- # local bdev_name=615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:14:07.455 11:58:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:07.455 11:58:12 -- common/autotest_common.sh@899 -- # local i 00:14:07.455 11:58:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:07.455 11:58:12 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:07.455 11:58:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:07.712 11:58:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 615a35c1-ecfb-4cc5-bf56-54c77d795b2e -t 2000 00:14:07.970 [ 00:14:07.970 { 00:14:07.970 "name": "615a35c1-ecfb-4cc5-bf56-54c77d795b2e", 00:14:07.970 "aliases": [ 00:14:07.970 "lvs/lvol" 00:14:07.970 ], 00:14:07.970 "product_name": "Logical Volume", 00:14:07.970 "block_size": 4096, 00:14:07.970 "num_blocks": 38912, 00:14:07.970 "uuid": "615a35c1-ecfb-4cc5-bf56-54c77d795b2e", 00:14:07.970 "assigned_rate_limits": { 00:14:07.970 "rw_ios_per_sec": 0, 00:14:07.970 "rw_mbytes_per_sec": 0, 00:14:07.970 "r_mbytes_per_sec": 0, 00:14:07.970 "w_mbytes_per_sec": 0 00:14:07.970 }, 00:14:07.970 "claimed": false, 00:14:07.970 "zoned": false, 00:14:07.970 "supported_io_types": { 00:14:07.970 "read": true, 00:14:07.970 "write": true, 00:14:07.970 "unmap": true, 00:14:07.970 "write_zeroes": true, 00:14:07.970 "flush": false, 00:14:07.970 "reset": true, 00:14:07.970 "compare": false, 00:14:07.970 "compare_and_write": false, 00:14:07.970 "abort": false, 00:14:07.970 "nvme_admin": false, 00:14:07.970 "nvme_io": false 00:14:07.970 }, 00:14:07.970 "driver_specific": { 00:14:07.970 "lvol": { 00:14:07.970 "lvol_store_uuid": "dfe67a10-19a0-41cc-b0e2-89747d0ca9ee", 00:14:07.970 "base_bdev": "aio_bdev", 00:14:07.970 "thin_provision": false, 00:14:07.970 "snapshot": false, 00:14:07.970 "clone": false, 00:14:07.970 "esnap_clone": false 00:14:07.970 } 00:14:07.970 } 00:14:07.970 } 00:14:07.970 ] 00:14:07.970 11:58:13 -- common/autotest_common.sh@905 -- # return 0 00:14:07.970 11:58:13 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:07.970 11:58:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:08.228 11:58:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:08.228 11:58:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:08.228 11:58:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:08.487 11:58:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:08.487 11:58:13 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 615a35c1-ecfb-4cc5-bf56-54c77d795b2e 00:14:08.746 11:58:14 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dfe67a10-19a0-41cc-b0e2-89747d0ca9ee 00:14:09.313 11:58:14 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:09.572 11:58:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.830 ************************************ 00:14:09.830 END TEST lvs_grow_dirty 00:14:09.830 ************************************ 00:14:09.830 00:14:09.830 real 0m21.592s 00:14:09.830 user 0m45.556s 00:14:09.830 sys 0m7.987s 00:14:09.830 11:58:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:09.830 11:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:09.830 11:58:15 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:09.830 11:58:15 -- common/autotest_common.sh@806 -- # type=--id 00:14:09.830 11:58:15 -- 
common/autotest_common.sh@807 -- # id=0 00:14:09.830 11:58:15 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:09.830 11:58:15 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:10.088 11:58:15 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:10.088 11:58:15 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:10.088 11:58:15 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:10.088 11:58:15 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:10.088 nvmf_trace.0 00:14:10.088 11:58:15 -- common/autotest_common.sh@821 -- # return 0 00:14:10.088 11:58:15 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:10.088 11:58:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.088 11:58:15 -- nvmf/common.sh@116 -- # sync 00:14:10.088 11:58:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.088 11:58:15 -- nvmf/common.sh@119 -- # set +e 00:14:10.088 11:58:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.088 11:58:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.088 rmmod nvme_tcp 00:14:10.088 rmmod nvme_fabrics 00:14:10.088 rmmod nvme_keyring 00:14:10.088 11:58:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.088 11:58:15 -- nvmf/common.sh@123 -- # set -e 00:14:10.088 11:58:15 -- nvmf/common.sh@124 -- # return 0 00:14:10.088 11:58:15 -- nvmf/common.sh@477 -- # '[' -n 73605 ']' 00:14:10.088 11:58:15 -- nvmf/common.sh@478 -- # killprocess 73605 00:14:10.088 11:58:15 -- common/autotest_common.sh@936 -- # '[' -z 73605 ']' 00:14:10.088 11:58:15 -- common/autotest_common.sh@940 -- # kill -0 73605 00:14:10.088 11:58:15 -- common/autotest_common.sh@941 -- # uname 00:14:10.088 11:58:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.088 11:58:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73605 00:14:10.347 killing process with pid 73605 00:14:10.347 11:58:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:10.347 11:58:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:10.347 11:58:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73605' 00:14:10.347 11:58:15 -- common/autotest_common.sh@955 -- # kill 73605 00:14:10.347 11:58:15 -- common/autotest_common.sh@960 -- # wait 73605 00:14:10.605 11:58:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.605 11:58:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.605 11:58:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.605 11:58:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.605 11:58:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.605 11:58:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.605 11:58:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.605 11:58:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.605 11:58:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:10.605 ************************************ 00:14:10.605 END TEST nvmf_lvs_grow 00:14:10.605 ************************************ 00:14:10.605 00:14:10.605 real 0m43.100s 00:14:10.605 user 1m10.352s 00:14:10.605 sys 0m11.420s 00:14:10.605 11:58:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:10.605 11:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.605 11:58:15 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:10.605 11:58:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:10.605 11:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.605 11:58:15 -- common/autotest_common.sh@10 -- # set +x 00:14:10.605 ************************************ 00:14:10.605 START TEST nvmf_bdev_io_wait 00:14:10.606 ************************************ 00:14:10.606 11:58:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:10.606 * Looking for test storage... 00:14:10.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.606 11:58:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:10.606 11:58:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:10.606 11:58:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:10.865 11:58:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:10.865 11:58:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:10.865 11:58:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:10.865 11:58:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:10.865 11:58:16 -- scripts/common.sh@335 -- # IFS=.-: 00:14:10.865 11:58:16 -- scripts/common.sh@335 -- # read -ra ver1 00:14:10.865 11:58:16 -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.865 11:58:16 -- scripts/common.sh@336 -- # read -ra ver2 00:14:10.865 11:58:16 -- scripts/common.sh@337 -- # local 'op=<' 00:14:10.865 11:58:16 -- scripts/common.sh@339 -- # ver1_l=2 00:14:10.865 11:58:16 -- scripts/common.sh@340 -- # ver2_l=1 00:14:10.865 11:58:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:10.865 11:58:16 -- scripts/common.sh@343 -- # case "$op" in 00:14:10.865 11:58:16 -- scripts/common.sh@344 -- # : 1 00:14:10.865 11:58:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:10.865 11:58:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.865 11:58:16 -- scripts/common.sh@364 -- # decimal 1 00:14:10.865 11:58:16 -- scripts/common.sh@352 -- # local d=1 00:14:10.865 11:58:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.865 11:58:16 -- scripts/common.sh@354 -- # echo 1 00:14:10.865 11:58:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:10.865 11:58:16 -- scripts/common.sh@365 -- # decimal 2 00:14:10.865 11:58:16 -- scripts/common.sh@352 -- # local d=2 00:14:10.865 11:58:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.865 11:58:16 -- scripts/common.sh@354 -- # echo 2 00:14:10.865 11:58:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:10.865 11:58:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:10.865 11:58:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:10.865 11:58:16 -- scripts/common.sh@367 -- # return 0 00:14:10.865 11:58:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.865 11:58:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:10.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.865 --rc genhtml_branch_coverage=1 00:14:10.865 --rc genhtml_function_coverage=1 00:14:10.865 --rc genhtml_legend=1 00:14:10.865 --rc geninfo_all_blocks=1 00:14:10.865 --rc geninfo_unexecuted_blocks=1 00:14:10.865 00:14:10.865 ' 00:14:10.865 11:58:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:10.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.865 --rc genhtml_branch_coverage=1 00:14:10.865 --rc genhtml_function_coverage=1 00:14:10.865 --rc genhtml_legend=1 00:14:10.865 --rc geninfo_all_blocks=1 00:14:10.865 --rc geninfo_unexecuted_blocks=1 00:14:10.865 00:14:10.865 ' 00:14:10.865 11:58:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:10.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.865 --rc genhtml_branch_coverage=1 00:14:10.865 --rc genhtml_function_coverage=1 00:14:10.865 --rc genhtml_legend=1 00:14:10.865 --rc geninfo_all_blocks=1 00:14:10.865 --rc geninfo_unexecuted_blocks=1 00:14:10.865 00:14:10.865 ' 00:14:10.865 11:58:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:10.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.865 --rc genhtml_branch_coverage=1 00:14:10.865 --rc genhtml_function_coverage=1 00:14:10.865 --rc genhtml_legend=1 00:14:10.865 --rc geninfo_all_blocks=1 00:14:10.865 --rc geninfo_unexecuted_blocks=1 00:14:10.865 00:14:10.865 ' 00:14:10.865 11:58:16 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.865 11:58:16 -- nvmf/common.sh@7 -- # uname -s 00:14:10.865 11:58:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.865 11:58:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.865 11:58:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.865 11:58:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.865 11:58:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.865 11:58:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.865 11:58:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.865 11:58:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.865 11:58:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.865 11:58:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 
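The lt 1.15 2 / cmp_versions trace a few entries back is the autotest harness deciding whether the installed lcov predates 2.x before it picks coverage flags: both version strings are split on '.', '-' and ':' and compared component by component, with missing components counting as zero. A compact restatement of that comparison, assuming bash 4+; ver_lt is our name for the helper, not SPDK's:

ver_lt() {    # exit 0 if dotted version $1 sorts strictly before $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # first differing component decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov is pre-2.x: use the legacy --rc lcov_* options seen above"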
00:14:10.865 11:58:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:14:10.865 11:58:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.865 11:58:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.865 11:58:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.865 11:58:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.865 11:58:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.865 11:58:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.865 11:58:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.865 11:58:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.865 11:58:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.865 11:58:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.865 11:58:16 -- paths/export.sh@5 -- # export PATH 00:14:10.865 11:58:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.865 11:58:16 -- nvmf/common.sh@46 -- # : 0 00:14:10.865 11:58:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:10.865 11:58:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:10.865 11:58:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:10.865 11:58:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.865 11:58:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.865 11:58:16 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:10.865 11:58:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:10.865 11:58:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:10.865 11:58:16 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.865 11:58:16 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.865 11:58:16 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:10.865 11:58:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:10.865 11:58:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.865 11:58:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:10.865 11:58:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:10.865 11:58:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:10.865 11:58:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.865 11:58:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.865 11:58:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.865 11:58:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:10.865 11:58:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:10.865 11:58:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.865 11:58:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.865 11:58:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.865 11:58:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:10.865 11:58:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.866 11:58:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.866 11:58:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.866 11:58:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.866 11:58:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.866 11:58:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.866 11:58:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.866 11:58:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.866 11:58:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:10.866 11:58:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:10.866 Cannot find device "nvmf_tgt_br" 00:14:10.866 11:58:16 -- nvmf/common.sh@154 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.866 Cannot find device "nvmf_tgt_br2" 00:14:10.866 11:58:16 -- nvmf/common.sh@155 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:10.866 11:58:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:10.866 Cannot find device "nvmf_tgt_br" 00:14:10.866 11:58:16 -- nvmf/common.sh@157 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:10.866 Cannot find device "nvmf_tgt_br2" 00:14:10.866 11:58:16 -- nvmf/common.sh@158 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:10.866 11:58:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:10.866 11:58:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.866 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.866 11:58:16 -- nvmf/common.sh@161 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.866 11:58:16 -- nvmf/common.sh@162 -- # true 00:14:10.866 11:58:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.124 11:58:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:11.124 11:58:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.124 11:58:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.124 11:58:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.124 11:58:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.124 11:58:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.124 11:58:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:11.124 11:58:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:11.124 11:58:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:11.124 11:58:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:11.124 11:58:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:11.124 11:58:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:11.124 11:58:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.124 11:58:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.124 11:58:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.124 11:58:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:11.124 11:58:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:11.124 11:58:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.124 11:58:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.124 11:58:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.124 11:58:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.124 11:58:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.124 11:58:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:11.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:11.124 00:14:11.124 --- 10.0.0.2 ping statistics --- 00:14:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.124 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:11.124 11:58:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:11.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:11.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:11.124 00:14:11.124 --- 10.0.0.3 ping statistics --- 00:14:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.124 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:11.124 11:58:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:11.124 00:14:11.124 --- 10.0.0.1 ping statistics --- 00:14:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.124 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:11.124 11:58:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.124 11:58:16 -- nvmf/common.sh@421 -- # return 0 00:14:11.124 11:58:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:11.124 11:58:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.124 11:58:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:11.124 11:58:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:11.124 11:58:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.124 11:58:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:11.124 11:58:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:11.124 11:58:16 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:11.124 11:58:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:11.124 11:58:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.124 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.124 11:58:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:11.124 11:58:16 -- nvmf/common.sh@469 -- # nvmfpid=73928 00:14:11.124 11:58:16 -- nvmf/common.sh@470 -- # waitforlisten 73928 00:14:11.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.124 11:58:16 -- common/autotest_common.sh@829 -- # '[' -z 73928 ']' 00:14:11.124 11:58:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.124 11:58:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.124 11:58:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.124 11:58:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.124 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.124 [2024-11-29 11:58:16.625812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:11.124 [2024-11-29 11:58:16.626157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.382 [2024-11-29 11:58:16.765735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.640 [2024-11-29 11:58:16.898587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.640 [2024-11-29 11:58:16.898744] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.640 [2024-11-29 11:58:16.898758] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.640 [2024-11-29 11:58:16.898767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
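The nvmf_veth_init block above builds the whole test network from scratch: the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces get 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and everything is joined by the nvmf_br bridge with an iptables accept rule for TCP port 4420. Condensed from the trace, minus the delete/flush cleanup it attempts first:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # the three pings above check both directions before the target starts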
00:14:11.640 [2024-11-29 11:58:16.898911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.640 [2024-11-29 11:58:16.899052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.640 [2024-11-29 11:58:16.899979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.640 [2024-11-29 11:58:16.899986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.206 11:58:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.206 11:58:17 -- common/autotest_common.sh@862 -- # return 0 00:14:12.206 11:58:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.206 11:58:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.206 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.206 11:58:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.206 11:58:17 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:12.206 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.206 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.206 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.206 11:58:17 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:12.206 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.206 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.465 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.465 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 [2024-11-29 11:58:17.776233] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:12.465 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.465 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 Malloc0 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.465 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.465 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.465 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.465 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.465 11:58:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.465 11:58:17 -- common/autotest_common.sh@10 -- # set +x 00:14:12.465 [2024-11-29 11:58:17.850551] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.465 11:58:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73967 00:14:12.465 11:58:17 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:12.465 11:58:17 -- nvmf/common.sh@520 -- # config=() 00:14:12.465 11:58:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@30 -- # READ_PID=73969 00:14:12.465 11:58:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:12.465 11:58:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:12.465 { 00:14:12.465 "params": { 00:14:12.465 "name": "Nvme$subsystem", 00:14:12.465 "trtype": "$TEST_TRANSPORT", 00:14:12.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.465 "adrfam": "ipv4", 00:14:12.465 "trsvcid": "$NVMF_PORT", 00:14:12.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.465 "hdgst": ${hdgst:-false}, 00:14:12.465 "ddgst": ${ddgst:-false} 00:14:12.465 }, 00:14:12.465 "method": "bdev_nvme_attach_controller" 00:14:12.465 } 00:14:12.465 EOF 00:14:12.465 )") 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73972 00:14:12.465 11:58:17 -- nvmf/common.sh@520 -- # config=() 00:14:12.465 11:58:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:12.465 11:58:17 -- nvmf/common.sh@542 -- # cat 00:14:12.465 11:58:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:12.465 11:58:17 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73973 00:14:12.465 11:58:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:12.465 { 00:14:12.465 "params": { 00:14:12.465 "name": "Nvme$subsystem", 00:14:12.465 "trtype": "$TEST_TRANSPORT", 00:14:12.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.465 "adrfam": "ipv4", 00:14:12.465 "trsvcid": "$NVMF_PORT", 00:14:12.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.465 "hdgst": ${hdgst:-false}, 00:14:12.466 "ddgst": ${ddgst:-false} 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 } 00:14:12.466 EOF 00:14:12.466 )") 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:12.466 11:58:17 -- nvmf/common.sh@520 -- # config=() 00:14:12.466 11:58:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:12.466 11:58:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:12.466 11:58:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:12.466 { 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme$subsystem", 00:14:12.466 "trtype": "$TEST_TRANSPORT", 00:14:12.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "$NVMF_PORT", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.466 "hdgst": ${hdgst:-false}, 00:14:12.466 "ddgst": 
${ddgst:-false} 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 } 00:14:12.466 EOF 00:14:12.466 )") 00:14:12.466 11:58:17 -- nvmf/common.sh@544 -- # jq . 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@35 -- # sync 00:14:12.466 11:58:17 -- nvmf/common.sh@542 -- # cat 00:14:12.466 11:58:17 -- nvmf/common.sh@542 -- # cat 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:12.466 11:58:17 -- nvmf/common.sh@520 -- # config=() 00:14:12.466 11:58:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:12.466 11:58:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:12.466 11:58:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:12.466 { 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme$subsystem", 00:14:12.466 "trtype": "$TEST_TRANSPORT", 00:14:12.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "$NVMF_PORT", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.466 "hdgst": ${hdgst:-false}, 00:14:12.466 "ddgst": ${ddgst:-false} 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 } 00:14:12.466 EOF 00:14:12.466 )") 00:14:12.466 11:58:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:12.466 11:58:17 -- nvmf/common.sh@542 -- # cat 00:14:12.466 11:58:17 -- nvmf/common.sh@544 -- # jq . 00:14:12.466 11:58:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme1", 00:14:12.466 "trtype": "tcp", 00:14:12.466 "traddr": "10.0.0.2", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "4420", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.466 "hdgst": false, 00:14:12.466 "ddgst": false 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 }' 00:14:12.466 11:58:17 -- nvmf/common.sh@544 -- # jq . 00:14:12.466 11:58:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:12.466 11:58:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme1", 00:14:12.466 "trtype": "tcp", 00:14:12.466 "traddr": "10.0.0.2", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "4420", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.466 "hdgst": false, 00:14:12.466 "ddgst": false 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 }' 00:14:12.466 11:58:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:12.466 11:58:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme1", 00:14:12.466 "trtype": "tcp", 00:14:12.466 "traddr": "10.0.0.2", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "4420", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.466 "hdgst": false, 00:14:12.466 "ddgst": false 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 }' 00:14:12.466 11:58:17 -- nvmf/common.sh@544 -- # jq . 
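Each bdevperf instance above gets its bdev layer configured from JSON rather than live RPC: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry (the printf output visible in the trace) and the test feeds it through process substitution, which is where the --json /dev/fd/63 argument comes from. A trimmed sketch of the pattern for the write-workload instance; gen_cfg is our stand-in name, and the surrounding subsystems/bdev envelope is assumed here since only the attach-controller entry appears in this excerpt:

gen_cfg() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}
# <(gen_cfg) shows up inside bdevperf as /dev/fd/63, exactly as in the trace
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256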
00:14:12.466 11:58:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:12.466 11:58:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:12.466 "params": { 00:14:12.466 "name": "Nvme1", 00:14:12.466 "trtype": "tcp", 00:14:12.466 "traddr": "10.0.0.2", 00:14:12.466 "adrfam": "ipv4", 00:14:12.466 "trsvcid": "4420", 00:14:12.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.466 "hdgst": false, 00:14:12.466 "ddgst": false 00:14:12.466 }, 00:14:12.466 "method": "bdev_nvme_attach_controller" 00:14:12.466 }' 00:14:12.466 [2024-11-29 11:58:17.916915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.466 [2024-11-29 11:58:17.917091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.466 [2024-11-29 11:58:17.917185] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-29 11:58:17.917209] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:12.466 .cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:12.466 11:58:17 -- target/bdev_io_wait.sh@37 -- # wait 73967 00:14:12.466 [2024-11-29 11:58:17.926320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.466 [2024-11-29 11:58:17.926688] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:12.466 [2024-11-29 11:58:17.944600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:12.466 [2024-11-29 11:58:17.944699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:12.724 [2024-11-29 11:58:18.154641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.982 [2024-11-29 11:58:18.247079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:12.982 [2024-11-29 11:58:18.252101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.982 [2024-11-29 11:58:18.334038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.982 [2024-11-29 11:58:18.346654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:12.982 [2024-11-29 11:58:18.409150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:12.982 [2024-11-29 11:58:18.432592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.241 Running I/O for 1 seconds... 00:14:13.241 [2024-11-29 11:58:18.519175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.241 Running I/O for 1 seconds... 00:14:13.241 Running I/O for 1 seconds... 00:14:13.241 Running I/O for 1 seconds... 
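The four interleaved DPDK banners and "Running I/O for 1 seconds..." lines above come from four bdevperf processes launched in parallel against the same subsystem, one per workload. Each gets its own core mask (-m), shared-memory id (-i) and hence DPDK file prefix (spdk1..spdk4) so their hugepage state does not collide, and the script then waits on each PID (the wait 73967/73969/73972/73973 calls). In outline, reusing gen_cfg from the sketch above:

./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256 & write_pid=$!
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_cfg) -q 128 -o 4096 -w read -t 1 -s 256 & read_pid=$!
./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_cfg) -q 128 -o 4096 -w flush -t 1 -s 256 & flush_pid=$!
./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_cfg) -q 128 -o 4096 -w unmap -t 1 -s 256 & unmap_pid=$!
wait "$write_pid" "$read_pid" "$flush_pid" "$unmap_pid"    # all four 1-second runs must finish cleanly

In the result tables that follow, the MiB/s column is simply this 4096-byte I/O size times the measured IOPS: 8695.30 * 4096 / 1048576 ≈ 33.97 MiB/s for the write job, 162447.43 * 4096 / 1048576 ≈ 634.56 MiB/s for the flush job, matching what bdevperf reports.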
00:14:14.174 00:14:14.174 Latency(us) 00:14:14.174 [2024-11-29T11:58:19.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.174 [2024-11-29T11:58:19.685Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:14.174 Nvme1n1 : 1.02 5678.50 22.18 0.00 0.00 22293.70 10187.87 35985.22 00:14:14.174 [2024-11-29T11:58:19.685Z] =================================================================================================================== 00:14:14.174 [2024-11-29T11:58:19.685Z] Total : 5678.50 22.18 0.00 0.00 22293.70 10187.87 35985.22 00:14:14.174 00:14:14.175 Latency(us) 00:14:14.175 [2024-11-29T11:58:19.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.175 [2024-11-29T11:58:19.686Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:14.175 Nvme1n1 : 1.00 162447.43 634.56 0.00 0.00 785.21 424.49 1712.87 00:14:14.175 [2024-11-29T11:58:19.686Z] =================================================================================================================== 00:14:14.175 [2024-11-29T11:58:19.686Z] Total : 162447.43 634.56 0.00 0.00 785.21 424.49 1712.87 00:14:14.175 00:14:14.175 Latency(us) 00:14:14.175 [2024-11-29T11:58:19.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.175 [2024-11-29T11:58:19.686Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:14.175 Nvme1n1 : 1.01 5657.59 22.10 0.00 0.00 22544.73 6106.76 41704.73 00:14:14.175 [2024-11-29T11:58:19.686Z] =================================================================================================================== 00:14:14.175 [2024-11-29T11:58:19.686Z] Total : 5657.59 22.10 0.00 0.00 22544.73 6106.76 41704.73 00:14:14.432 00:14:14.432 Latency(us) 00:14:14.432 [2024-11-29T11:58:19.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.432 [2024-11-29T11:58:19.943Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:14.432 Nvme1n1 : 1.01 8695.30 33.97 0.00 0.00 14621.45 8460.10 27644.28 00:14:14.432 [2024-11-29T11:58:19.943Z] =================================================================================================================== 00:14:14.432 [2024-11-29T11:58:19.943Z] Total : 8695.30 33.97 0.00 0.00 14621.45 8460.10 27644.28 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@38 -- # wait 73969 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@39 -- # wait 73972 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@40 -- # wait 73973 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.690 11:58:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.690 11:58:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.690 11:58:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:14.690 11:58:20 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:14.690 11:58:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.690 11:58:20 -- nvmf/common.sh@116 -- # sync 00:14:14.690 11:58:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:14.690 11:58:20 -- nvmf/common.sh@119 -- # set +e 00:14:14.690 11:58:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.690 11:58:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:14.690 rmmod nvme_tcp 00:14:14.690 rmmod nvme_fabrics 00:14:14.690 rmmod nvme_keyring 00:14:14.690 11:58:20 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.690 11:58:20 -- nvmf/common.sh@123 -- # set -e 00:14:14.690 11:58:20 -- nvmf/common.sh@124 -- # return 0 00:14:14.690 11:58:20 -- nvmf/common.sh@477 -- # '[' -n 73928 ']' 00:14:14.690 11:58:20 -- nvmf/common.sh@478 -- # killprocess 73928 00:14:14.690 11:58:20 -- common/autotest_common.sh@936 -- # '[' -z 73928 ']' 00:14:14.690 11:58:20 -- common/autotest_common.sh@940 -- # kill -0 73928 00:14:14.690 11:58:20 -- common/autotest_common.sh@941 -- # uname 00:14:14.691 11:58:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.691 11:58:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73928 00:14:14.691 killing process with pid 73928 00:14:14.691 11:58:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.691 11:58:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.691 11:58:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73928' 00:14:14.691 11:58:20 -- common/autotest_common.sh@955 -- # kill 73928 00:14:14.691 11:58:20 -- common/autotest_common.sh@960 -- # wait 73928 00:14:14.948 11:58:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.948 11:58:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:14.948 11:58:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:14.948 11:58:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.948 11:58:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:14.948 11:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.948 11:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.948 11:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.206 11:58:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:15.206 00:14:15.206 real 0m4.481s 00:14:15.206 user 0m19.177s 00:14:15.206 sys 0m2.371s 00:14:15.206 ************************************ 00:14:15.206 END TEST nvmf_bdev_io_wait 00:14:15.206 ************************************ 00:14:15.206 11:58:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.206 11:58:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.206 11:58:20 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:15.206 11:58:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.206 11:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.206 11:58:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.206 ************************************ 00:14:15.206 START TEST nvmf_queue_depth 00:14:15.206 ************************************ 00:14:15.206 11:58:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:15.206 * Looking for test storage... 
00:14:15.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.206 11:58:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:15.206 11:58:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:15.206 11:58:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:15.206 11:58:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:15.206 11:58:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:15.206 11:58:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:15.206 11:58:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:15.206 11:58:20 -- scripts/common.sh@335 -- # IFS=.-: 00:14:15.206 11:58:20 -- scripts/common.sh@335 -- # read -ra ver1 00:14:15.206 11:58:20 -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.206 11:58:20 -- scripts/common.sh@336 -- # read -ra ver2 00:14:15.206 11:58:20 -- scripts/common.sh@337 -- # local 'op=<' 00:14:15.206 11:58:20 -- scripts/common.sh@339 -- # ver1_l=2 00:14:15.206 11:58:20 -- scripts/common.sh@340 -- # ver2_l=1 00:14:15.206 11:58:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:15.206 11:58:20 -- scripts/common.sh@343 -- # case "$op" in 00:14:15.206 11:58:20 -- scripts/common.sh@344 -- # : 1 00:14:15.206 11:58:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:15.206 11:58:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.206 11:58:20 -- scripts/common.sh@364 -- # decimal 1 00:14:15.206 11:58:20 -- scripts/common.sh@352 -- # local d=1 00:14:15.206 11:58:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.206 11:58:20 -- scripts/common.sh@354 -- # echo 1 00:14:15.206 11:58:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:15.206 11:58:20 -- scripts/common.sh@365 -- # decimal 2 00:14:15.206 11:58:20 -- scripts/common.sh@352 -- # local d=2 00:14:15.206 11:58:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.206 11:58:20 -- scripts/common.sh@354 -- # echo 2 00:14:15.206 11:58:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:15.206 11:58:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:15.206 11:58:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:15.206 11:58:20 -- scripts/common.sh@367 -- # return 0 00:14:15.206 11:58:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.206 11:58:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:15.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.206 --rc genhtml_branch_coverage=1 00:14:15.206 --rc genhtml_function_coverage=1 00:14:15.206 --rc genhtml_legend=1 00:14:15.206 --rc geninfo_all_blocks=1 00:14:15.206 --rc geninfo_unexecuted_blocks=1 00:14:15.206 00:14:15.206 ' 00:14:15.206 11:58:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:15.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.206 --rc genhtml_branch_coverage=1 00:14:15.206 --rc genhtml_function_coverage=1 00:14:15.206 --rc genhtml_legend=1 00:14:15.206 --rc geninfo_all_blocks=1 00:14:15.206 --rc geninfo_unexecuted_blocks=1 00:14:15.206 00:14:15.206 ' 00:14:15.206 11:58:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:15.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.206 --rc genhtml_branch_coverage=1 00:14:15.206 --rc genhtml_function_coverage=1 00:14:15.206 --rc genhtml_legend=1 00:14:15.206 --rc geninfo_all_blocks=1 00:14:15.206 --rc geninfo_unexecuted_blocks=1 00:14:15.206 00:14:15.206 ' 00:14:15.207 
11:58:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:15.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.207 --rc genhtml_branch_coverage=1 00:14:15.207 --rc genhtml_function_coverage=1 00:14:15.207 --rc genhtml_legend=1 00:14:15.207 --rc geninfo_all_blocks=1 00:14:15.207 --rc geninfo_unexecuted_blocks=1 00:14:15.207 00:14:15.207 ' 00:14:15.207 11:58:20 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.207 11:58:20 -- nvmf/common.sh@7 -- # uname -s 00:14:15.207 11:58:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.207 11:58:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.207 11:58:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.207 11:58:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.207 11:58:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.207 11:58:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.207 11:58:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.207 11:58:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.207 11:58:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.465 11:58:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.465 11:58:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:14:15.465 11:58:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:14:15.465 11:58:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.465 11:58:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.465 11:58:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.465 11:58:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.465 11:58:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.465 11:58:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.465 11:58:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.465 11:58:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.465 11:58:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.465 11:58:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.465 11:58:20 -- paths/export.sh@5 -- # export PATH 00:14:15.465 11:58:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.465 11:58:20 -- nvmf/common.sh@46 -- # : 0 00:14:15.465 11:58:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.465 11:58:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.465 11:58:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.465 11:58:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.465 11:58:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.465 11:58:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.465 11:58:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.465 11:58:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.465 11:58:20 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:15.465 11:58:20 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:15.465 11:58:20 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.465 11:58:20 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:15.465 11:58:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.465 11:58:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.465 11:58:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.465 11:58:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.465 11:58:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.465 11:58:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.465 11:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.465 11:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.465 11:58:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:15.465 11:58:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:15.465 11:58:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:15.465 11:58:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:15.465 11:58:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:15.465 11:58:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:15.465 11:58:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.465 11:58:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.465 11:58:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.465 11:58:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:15.465 11:58:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.465 11:58:20 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.465 11:58:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.465 11:58:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.465 11:58:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.465 11:58:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.465 11:58:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.465 11:58:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.465 11:58:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:15.465 11:58:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:15.465 Cannot find device "nvmf_tgt_br" 00:14:15.465 11:58:20 -- nvmf/common.sh@154 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.465 Cannot find device "nvmf_tgt_br2" 00:14:15.465 11:58:20 -- nvmf/common.sh@155 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:15.465 11:58:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:15.465 Cannot find device "nvmf_tgt_br" 00:14:15.465 11:58:20 -- nvmf/common.sh@157 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:15.465 Cannot find device "nvmf_tgt_br2" 00:14:15.465 11:58:20 -- nvmf/common.sh@158 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:15.465 11:58:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:15.465 11:58:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.465 11:58:20 -- nvmf/common.sh@161 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.465 11:58:20 -- nvmf/common.sh@162 -- # true 00:14:15.465 11:58:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.465 11:58:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.465 11:58:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.465 11:58:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.465 11:58:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.465 11:58:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.465 11:58:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.465 11:58:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:15.465 11:58:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:15.465 11:58:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:15.465 11:58:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:15.465 11:58:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:15.465 11:58:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:15.465 11:58:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.723 11:58:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:14:15.723 11:58:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.723 11:58:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:15.723 11:58:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:15.723 11:58:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.723 11:58:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.723 11:58:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.723 11:58:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.723 11:58:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.723 11:58:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:15.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:14:15.723 00:14:15.723 --- 10.0.0.2 ping statistics --- 00:14:15.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.723 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:15.723 11:58:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:15.723 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:15.723 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:15.723 00:14:15.723 --- 10.0.0.3 ping statistics --- 00:14:15.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.723 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:15.723 11:58:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:15.723 00:14:15.723 --- 10.0.0.1 ping statistics --- 00:14:15.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.723 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:15.723 11:58:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.723 11:58:21 -- nvmf/common.sh@421 -- # return 0 00:14:15.723 11:58:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:15.723 11:58:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.723 11:58:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:15.723 11:58:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:15.723 11:58:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.723 11:58:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:15.723 11:58:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:15.723 11:58:21 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:15.723 11:58:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:15.723 11:58:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.723 11:58:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
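For queue_depth the target is brought up the same way but on a single core: nvmfappstart -m 0x2 launches nvmf_tgt inside the namespace (the ip netns exec invocation appears in the next trace entry) and then blocks until the RPC socket answers. Stripped of the autotest plumbing, and with an rpc_get_methods poll as our stand-in for waitforlisten (which additionally checks that the PID stays alive), the start-up is roughly, from the SPDK repo root:

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# wait for /var/tmp/spdk.sock to accept RPCs before configuring anything
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
done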
00:14:15.723 11:58:21 -- nvmf/common.sh@469 -- # nvmfpid=74213 00:14:15.723 11:58:21 -- nvmf/common.sh@470 -- # waitforlisten 74213 00:14:15.723 11:58:21 -- common/autotest_common.sh@829 -- # '[' -z 74213 ']' 00:14:15.723 11:58:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.723 11:58:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.723 11:58:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.723 11:58:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.723 11:58:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.723 11:58:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.723 [2024-11-29 11:58:21.148793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:15.723 [2024-11-29 11:58:21.149164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.982 [2024-11-29 11:58:21.293557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.982 [2024-11-29 11:58:21.424073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:15.982 [2024-11-29 11:58:21.424540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.982 [2024-11-29 11:58:21.424745] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.982 [2024-11-29 11:58:21.424950] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
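Once the target answers on its socket, the remainder of the test (traced below) is a target-side configuration followed by a remotely driven bdevperf run: create the TCP transport, back it with a 64 MiB / 512-byte-block Malloc bdev, expose it as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, then start bdevperf idle (-z) on its own RPC socket, attach the controller over that socket, and kick off the 10-second, queue-depth-1024 verify run. Condensed to plain rpc.py calls (rpc_cmd in the trace is the harness wrapper around the same script), paths relative to the repo root:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# (the script also waits for /var/tmp/bdevperf.sock to come up before this attach)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests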
00:14:15.982 [2024-11-29 11:58:21.425099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.914 11:58:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.914 11:58:22 -- common/autotest_common.sh@862 -- # return 0 00:14:16.914 11:58:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:16.914 11:58:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 11:58:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.914 11:58:22 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.914 11:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 [2024-11-29 11:58:22.186029] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.914 11:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.914 11:58:22 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:16.914 11:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 Malloc0 00:14:16.914 11:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.914 11:58:22 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:16.914 11:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 11:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.914 11:58:22 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:16.914 11:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 11:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.914 11:58:22 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.914 11:58:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 [2024-11-29 11:58:22.256269] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.914 11:58:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.914 11:58:22 -- target/queue_depth.sh@30 -- # bdevperf_pid=74245 00:14:16.914 11:58:22 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:16.914 11:58:22 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:16.914 11:58:22 -- target/queue_depth.sh@33 -- # waitforlisten 74245 /var/tmp/bdevperf.sock 00:14:16.914 11:58:22 -- common/autotest_common.sh@829 -- # '[' -z 74245 ']' 00:14:16.914 11:58:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.914 11:58:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.914 11:58:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:16.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.914 11:58:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.914 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:16.914 [2024-11-29 11:58:22.306990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:16.914 [2024-11-29 11:58:22.307212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74245 ] 00:14:17.172 [2024-11-29 11:58:22.438964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.172 [2024-11-29 11:58:22.555411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.106 11:58:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.106 11:58:23 -- common/autotest_common.sh@862 -- # return 0 00:14:18.106 11:58:23 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:18.106 11:58:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.106 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:14:18.106 NVMe0n1 00:14:18.106 11:58:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.106 11:58:23 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:18.106 Running I/O for 10 seconds... 00:14:30.307 00:14:30.307 Latency(us) 00:14:30.307 [2024-11-29T11:58:35.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.307 [2024-11-29T11:58:35.818Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:30.307 Verification LBA range: start 0x0 length 0x4000 00:14:30.307 NVMe0n1 : 10.07 13727.42 53.62 0.00 0.00 74279.79 17158.52 62437.93 00:14:30.307 [2024-11-29T11:58:35.818Z] =================================================================================================================== 00:14:30.307 [2024-11-29T11:58:35.818Z] Total : 13727.42 53.62 0.00 0.00 74279.79 17158.52 62437.93 00:14:30.307 0 00:14:30.307 11:58:33 -- target/queue_depth.sh@39 -- # killprocess 74245 00:14:30.307 11:58:33 -- common/autotest_common.sh@936 -- # '[' -z 74245 ']' 00:14:30.307 11:58:33 -- common/autotest_common.sh@940 -- # kill -0 74245 00:14:30.307 11:58:33 -- common/autotest_common.sh@941 -- # uname 00:14:30.307 11:58:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.307 11:58:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74245 00:14:30.307 11:58:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.307 11:58:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.307 11:58:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74245' 00:14:30.307 killing process with pid 74245 00:14:30.307 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.307 00:14:30.307 Latency(us) 00:14:30.307 [2024-11-29T11:58:35.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.307 [2024-11-29T11:58:35.818Z] =================================================================================================================== 00:14:30.307 [2024-11-29T11:58:35.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
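The queue_depth test body that produced the summary above is only a handful of RPCs plus one bdevperf run at queue depth 1024. Roughly, with rpc_cmd expanded to the rpc.py script it wraps and all values taken from the trace (a sketch, not the verbatim script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks,
    # one subsystem carrying that namespace, and a listener on 10.0.0.2:4420
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf at -q 1024, attach the remote controller, run for 10s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf idle until the perform_tests RPC arrives, which is why the controller can be attached over /var/tmp/bdevperf.sock first; a healthy run ends with a single latency line like the ~13.7k IOPS one above before both processes are killed and the nvme modules unloaded.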
00:14:30.307 11:58:33 -- common/autotest_common.sh@955 -- # kill 74245 00:14:30.307 11:58:33 -- common/autotest_common.sh@960 -- # wait 74245 00:14:30.307 11:58:33 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:30.307 11:58:33 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:30.307 11:58:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.307 11:58:33 -- nvmf/common.sh@116 -- # sync 00:14:30.307 11:58:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.307 11:58:34 -- nvmf/common.sh@119 -- # set +e 00:14:30.307 11:58:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.307 11:58:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.307 rmmod nvme_tcp 00:14:30.307 rmmod nvme_fabrics 00:14:30.307 rmmod nvme_keyring 00:14:30.307 11:58:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.307 11:58:34 -- nvmf/common.sh@123 -- # set -e 00:14:30.307 11:58:34 -- nvmf/common.sh@124 -- # return 0 00:14:30.307 11:58:34 -- nvmf/common.sh@477 -- # '[' -n 74213 ']' 00:14:30.307 11:58:34 -- nvmf/common.sh@478 -- # killprocess 74213 00:14:30.307 11:58:34 -- common/autotest_common.sh@936 -- # '[' -z 74213 ']' 00:14:30.307 11:58:34 -- common/autotest_common.sh@940 -- # kill -0 74213 00:14:30.307 11:58:34 -- common/autotest_common.sh@941 -- # uname 00:14:30.307 11:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.307 11:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74213 00:14:30.307 killing process with pid 74213 00:14:30.307 11:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:30.307 11:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:30.307 11:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74213' 00:14:30.307 11:58:34 -- common/autotest_common.sh@955 -- # kill 74213 00:14:30.307 11:58:34 -- common/autotest_common.sh@960 -- # wait 74213 00:14:30.307 11:58:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:30.307 11:58:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:30.307 11:58:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:30.307 11:58:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.307 11:58:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:30.307 11:58:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.307 11:58:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.307 11:58:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.307 11:58:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:30.307 00:14:30.307 real 0m13.970s 00:14:30.307 user 0m24.116s 00:14:30.307 sys 0m2.178s 00:14:30.307 11:58:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:30.307 ************************************ 00:14:30.307 END TEST nvmf_queue_depth 00:14:30.307 ************************************ 00:14:30.307 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:14:30.307 11:58:34 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:30.307 11:58:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:30.307 11:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.307 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:14:30.307 ************************************ 00:14:30.307 START TEST nvmf_multipath 00:14:30.307 ************************************ 00:14:30.307 11:58:34 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:30.307 * Looking for test storage... 00:14:30.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:30.307 11:58:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:30.307 11:58:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:30.307 11:58:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:30.307 11:58:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:30.307 11:58:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:30.307 11:58:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:30.307 11:58:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:30.307 11:58:34 -- scripts/common.sh@335 -- # IFS=.-: 00:14:30.307 11:58:34 -- scripts/common.sh@335 -- # read -ra ver1 00:14:30.307 11:58:34 -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.307 11:58:34 -- scripts/common.sh@336 -- # read -ra ver2 00:14:30.307 11:58:34 -- scripts/common.sh@337 -- # local 'op=<' 00:14:30.307 11:58:34 -- scripts/common.sh@339 -- # ver1_l=2 00:14:30.307 11:58:34 -- scripts/common.sh@340 -- # ver2_l=1 00:14:30.307 11:58:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:30.307 11:58:34 -- scripts/common.sh@343 -- # case "$op" in 00:14:30.307 11:58:34 -- scripts/common.sh@344 -- # : 1 00:14:30.307 11:58:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:30.307 11:58:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.307 11:58:34 -- scripts/common.sh@364 -- # decimal 1 00:14:30.307 11:58:34 -- scripts/common.sh@352 -- # local d=1 00:14:30.307 11:58:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.307 11:58:34 -- scripts/common.sh@354 -- # echo 1 00:14:30.307 11:58:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:30.307 11:58:34 -- scripts/common.sh@365 -- # decimal 2 00:14:30.307 11:58:34 -- scripts/common.sh@352 -- # local d=2 00:14:30.307 11:58:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.307 11:58:34 -- scripts/common.sh@354 -- # echo 2 00:14:30.307 11:58:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:30.307 11:58:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:30.307 11:58:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:30.307 11:58:34 -- scripts/common.sh@367 -- # return 0 00:14:30.307 11:58:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.307 11:58:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.307 --rc genhtml_branch_coverage=1 00:14:30.307 --rc genhtml_function_coverage=1 00:14:30.307 --rc genhtml_legend=1 00:14:30.307 --rc geninfo_all_blocks=1 00:14:30.307 --rc geninfo_unexecuted_blocks=1 00:14:30.307 00:14:30.307 ' 00:14:30.307 11:58:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.307 --rc genhtml_branch_coverage=1 00:14:30.307 --rc genhtml_function_coverage=1 00:14:30.307 --rc genhtml_legend=1 00:14:30.307 --rc geninfo_all_blocks=1 00:14:30.307 --rc geninfo_unexecuted_blocks=1 00:14:30.307 00:14:30.307 ' 00:14:30.308 11:58:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:30.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.308 --rc genhtml_branch_coverage=1 00:14:30.308 --rc genhtml_function_coverage=1 00:14:30.308 
--rc genhtml_legend=1 00:14:30.308 --rc geninfo_all_blocks=1 00:14:30.308 --rc geninfo_unexecuted_blocks=1 00:14:30.308 00:14:30.308 ' 00:14:30.308 11:58:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:30.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.308 --rc genhtml_branch_coverage=1 00:14:30.308 --rc genhtml_function_coverage=1 00:14:30.308 --rc genhtml_legend=1 00:14:30.308 --rc geninfo_all_blocks=1 00:14:30.308 --rc geninfo_unexecuted_blocks=1 00:14:30.308 00:14:30.308 ' 00:14:30.308 11:58:34 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.308 11:58:34 -- nvmf/common.sh@7 -- # uname -s 00:14:30.308 11:58:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.308 11:58:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.308 11:58:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.308 11:58:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.308 11:58:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.308 11:58:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.308 11:58:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.308 11:58:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.308 11:58:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.308 11:58:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:14:30.308 11:58:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:14:30.308 11:58:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.308 11:58:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.308 11:58:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.308 11:58:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.308 11:58:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.308 11:58:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.308 11:58:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.308 11:58:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.308 11:58:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.308 11:58:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.308 11:58:34 -- paths/export.sh@5 -- # export PATH 00:14:30.308 11:58:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.308 11:58:34 -- nvmf/common.sh@46 -- # : 0 00:14:30.308 11:58:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:30.308 11:58:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:30.308 11:58:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:30.308 11:58:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.308 11:58:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.308 11:58:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:30.308 11:58:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:30.308 11:58:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:30.308 11:58:34 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.308 11:58:34 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.308 11:58:34 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:30.308 11:58:34 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.308 11:58:34 -- target/multipath.sh@43 -- # nvmftestinit 00:14:30.308 11:58:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:30.308 11:58:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.308 11:58:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:30.308 11:58:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:30.308 11:58:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:30.308 11:58:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.308 11:58:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.308 11:58:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.308 11:58:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:30.308 11:58:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:30.308 11:58:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.308 11:58:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.308 11:58:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.308 11:58:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:30.308 11:58:34 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.308 11:58:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.308 11:58:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.308 11:58:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.308 11:58:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.308 11:58:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.308 11:58:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.308 11:58:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.308 11:58:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:30.308 11:58:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:30.308 Cannot find device "nvmf_tgt_br" 00:14:30.308 11:58:34 -- nvmf/common.sh@154 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.308 Cannot find device "nvmf_tgt_br2" 00:14:30.308 11:58:34 -- nvmf/common.sh@155 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:30.308 11:58:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:30.308 Cannot find device "nvmf_tgt_br" 00:14:30.308 11:58:34 -- nvmf/common.sh@157 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:30.308 Cannot find device "nvmf_tgt_br2" 00:14:30.308 11:58:34 -- nvmf/common.sh@158 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:30.308 11:58:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:30.308 11:58:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.308 11:58:34 -- nvmf/common.sh@161 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.308 11:58:34 -- nvmf/common.sh@162 -- # true 00:14:30.308 11:58:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.308 11:58:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.308 11:58:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.308 11:58:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.308 11:58:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.308 11:58:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.308 11:58:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.308 11:58:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.308 11:58:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.308 11:58:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:30.308 11:58:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:30.308 11:58:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:30.308 11:58:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:30.308 11:58:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:14:30.308 11:58:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.308 11:58:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.308 11:58:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:30.308 11:58:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:30.308 11:58:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.308 11:58:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.308 11:58:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.308 11:58:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.308 11:58:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.308 11:58:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:30.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:14:30.308 00:14:30.308 --- 10.0.0.2 ping statistics --- 00:14:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.308 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:30.308 11:58:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:30.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:30.309 00:14:30.309 --- 10.0.0.3 ping statistics --- 00:14:30.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.309 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:30.309 11:58:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:30.309 00:14:30.309 --- 10.0.0.1 ping statistics --- 00:14:30.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.309 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:30.309 11:58:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.309 11:58:35 -- nvmf/common.sh@421 -- # return 0 00:14:30.309 11:58:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.309 11:58:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.309 11:58:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.309 11:58:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.309 11:58:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.309 11:58:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.309 11:58:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.309 11:58:35 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:30.309 11:58:35 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:30.309 11:58:35 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:30.309 11:58:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.309 11:58:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.309 11:58:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:30.309 11:58:35 -- nvmf/common.sh@469 -- # nvmfpid=74585 00:14:30.309 11:58:35 -- nvmf/common.sh@470 -- # waitforlisten 74585 00:14:30.309 11:58:35 -- common/autotest_common.sh@829 -- # '[' -z 74585 ']' 00:14:30.309 11:58:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.309 11:58:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.309 11:58:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.309 11:58:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.309 11:58:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.309 11:58:35 -- common/autotest_common.sh@10 -- # set +x 00:14:30.309 [2024-11-29 11:58:35.202822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:30.309 [2024-11-29 11:58:35.202922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.309 [2024-11-29 11:58:35.342369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.309 [2024-11-29 11:58:35.480630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.309 [2024-11-29 11:58:35.480990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.309 [2024-11-29 11:58:35.481136] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.309 [2024-11-29 11:58:35.481294] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
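The multipath test starting here differs from the queue_depth run in two ways: nvmf_tgt now gets four cores (-m 0xF), and the subsystem created a few lines below is ANA-enabled (-r) with listeners on both 10.0.0.2 and 10.0.0.3, so the host sees two paths to the same namespace. Condensed from the rpc.py and nvme connect calls that follow (a sketch; the serial, host NQN/ID and paths are the ones in the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # one connect per path; -g/-G enable TCP header and data digests
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

Because both connections share the host NQN and subsystem NQN, the kernel groups them under one subsystem (nvme-subsys0 below) as paths nvme0c0n1 and nvme0c1n1; the test then drives failover by flipping each listener's ANA state with nvmf_subsystem_listener_set_ana_state while fio runs against /dev/nvme0n1.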
00:14:30.309 [2024-11-29 11:58:35.481553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.309 [2024-11-29 11:58:35.481621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.309 [2024-11-29 11:58:35.481701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.309 [2024-11-29 11:58:35.481712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.876 11:58:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.876 11:58:36 -- common/autotest_common.sh@862 -- # return 0 00:14:30.876 11:58:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.876 11:58:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.876 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.876 11:58:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.876 11:58:36 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.133 [2024-11-29 11:58:36.486049] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.134 11:58:36 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:31.392 Malloc0 00:14:31.392 11:58:36 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:31.651 11:58:37 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.910 11:58:37 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.168 [2024-11-29 11:58:37.650075] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.168 11:58:37 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:32.427 [2024-11-29 11:58:37.922300] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.686 11:58:37 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:32.686 11:58:38 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:32.972 11:58:38 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.972 11:58:38 -- common/autotest_common.sh@1187 -- # local i=0 00:14:32.972 11:58:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.972 11:58:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:32.972 11:58:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:34.873 11:58:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:34.873 11:58:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:34.873 11:58:40 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.873 11:58:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:34.873 11:58:40 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.873 11:58:40 -- common/autotest_common.sh@1197 -- # return 0 00:14:34.873 11:58:40 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:34.873 11:58:40 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:34.873 11:58:40 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:34.873 11:58:40 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:34.873 11:58:40 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:34.873 11:58:40 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:34.873 11:58:40 -- target/multipath.sh@38 -- # return 0 00:14:34.873 11:58:40 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:34.873 11:58:40 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:34.873 11:58:40 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:34.873 11:58:40 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:34.873 11:58:40 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:34.873 11:58:40 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:34.873 11:58:40 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:34.873 11:58:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:34.873 11:58:40 -- target/multipath.sh@22 -- # local timeout=20 00:14:34.873 11:58:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:34.873 11:58:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:34.873 11:58:40 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:34.873 11:58:40 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:34.873 11:58:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:34.873 11:58:40 -- target/multipath.sh@22 -- # local timeout=20 00:14:34.873 11:58:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:34.873 11:58:40 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:34.873 11:58:40 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:34.873 11:58:40 -- target/multipath.sh@85 -- # echo numa 00:14:34.873 11:58:40 -- target/multipath.sh@88 -- # fio_pid=74680 00:14:34.873 11:58:40 -- target/multipath.sh@90 -- # sleep 1 00:14:34.873 11:58:40 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:34.873 [global] 00:14:34.873 thread=1 00:14:34.873 invalidate=1 00:14:34.873 rw=randrw 00:14:34.873 time_based=1 00:14:34.873 runtime=6 00:14:34.873 ioengine=libaio 00:14:34.873 direct=1 00:14:34.873 bs=4096 00:14:34.873 iodepth=128 00:14:34.873 norandommap=0 00:14:34.873 numjobs=1 00:14:34.873 00:14:34.873 verify_dump=1 00:14:34.873 verify_backlog=512 00:14:34.873 verify_state_save=0 00:14:34.873 do_verify=1 00:14:34.873 verify=crc32c-intel 00:14:34.873 [job0] 00:14:34.873 filename=/dev/nvme0n1 00:14:34.873 Could not set queue depth (nvme0n1) 00:14:35.130 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.130 fio-3.35 00:14:35.130 Starting 1 thread 00:14:36.063 11:58:41 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:36.063 11:58:41 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:36.322 11:58:41 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:36.322 11:58:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:36.322 11:58:41 -- target/multipath.sh@22 -- # local timeout=20 00:14:36.322 11:58:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:36.322 11:58:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:36.322 11:58:41 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:36.322 11:58:41 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:36.322 11:58:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:36.322 11:58:41 -- target/multipath.sh@22 -- # local timeout=20 00:14:36.322 11:58:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:36.322 11:58:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:36.322 11:58:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:36.322 11:58:41 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:36.889 11:58:42 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:36.889 11:58:42 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:36.889 11:58:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:36.889 11:58:42 -- target/multipath.sh@22 -- # local timeout=20 00:14:36.889 11:58:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:36.889 11:58:42 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:36.889 11:58:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:36.889 11:58:42 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:36.889 11:58:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:36.889 11:58:42 -- target/multipath.sh@22 -- # local timeout=20 00:14:36.889 11:58:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:36.889 11:58:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:36.889 11:58:42 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:36.889 11:58:42 -- target/multipath.sh@104 -- # wait 74680 00:14:41.119 00:14:41.119 job0: (groupid=0, jobs=1): err= 0: pid=74701: Fri Nov 29 11:58:46 2024 00:14:41.119 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(244MiB/6007msec) 00:14:41.119 slat (usec): min=6, max=7936, avg=55.35, stdev=240.59 00:14:41.119 clat (usec): min=962, max=15861, avg=8307.21, stdev=1602.88 00:14:41.119 lat (usec): min=984, max=15909, avg=8362.57, stdev=1607.96 00:14:41.119 clat percentiles (usec): 00:14:41.119 | 1.00th=[ 4293], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 7373], 00:14:41.119 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455], 00:14:41.119 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[12125], 00:14:41.119 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[14222], 00:14:41.119 | 99.99th=[14484] 00:14:41.119 bw ( KiB/s): min=11808, max=27600, per=52.79%, avg=21915.55, stdev=4695.57, samples=11 00:14:41.119 iops : min= 2952, max= 6900, avg=5478.82, stdev=1173.86, samples=11 00:14:41.119 write: IOPS=6005, BW=23.5MiB/s (24.6MB/s)(131MiB/5594msec); 0 zone resets 00:14:41.119 slat (usec): min=14, max=5499, avg=65.82, stdev=164.38 00:14:41.119 clat (usec): min=1509, max=14186, avg=7259.63, stdev=1340.30 00:14:41.119 lat (usec): min=1553, max=14443, avg=7325.45, stdev=1344.36 00:14:41.119 clat percentiles (usec): 00:14:41.119 | 1.00th=[ 3392], 5.00th=[ 4424], 10.00th=[ 5407], 20.00th=[ 6652], 00:14:41.119 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:14:41.119 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:14:41.119 | 99.00th=[11469], 99.50th=[12125], 99.90th=[13173], 99.95th=[13435], 00:14:41.119 | 99.99th=[13829] 00:14:41.120 bw ( KiB/s): min=12496, max=27048, per=91.29%, avg=21931.73, stdev=4423.47, samples=11 00:14:41.120 iops : min= 3124, max= 6762, avg=5482.91, stdev=1105.86, samples=11 00:14:41.120 lat (usec) : 1000=0.01% 00:14:41.120 lat (msec) : 2=0.04%, 4=1.39%, 10=91.85%, 20=6.70% 00:14:41.120 cpu : usr=5.63%, sys=23.03%, ctx=5496, majf=0, minf=74 00:14:41.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:41.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.120 issued rwts: total=62345,33597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.120 00:14:41.120 Run status group 0 (all jobs): 00:14:41.120 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=244MiB (255MB), run=6007-6007msec 00:14:41.120 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=131MiB (138MB), run=5594-5594msec 00:14:41.120 00:14:41.120 Disk stats (read/write): 00:14:41.120 
nvme0n1: ios=61722/32750, merge=0/0, ticks=487436/221497, in_queue=708933, util=98.62% 00:14:41.120 11:58:46 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:41.686 11:58:46 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:41.945 11:58:47 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:41.945 11:58:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:41.945 11:58:47 -- target/multipath.sh@22 -- # local timeout=20 00:14:41.945 11:58:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:41.945 11:58:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:41.945 11:58:47 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:41.945 11:58:47 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:41.945 11:58:47 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:41.945 11:58:47 -- target/multipath.sh@22 -- # local timeout=20 00:14:41.945 11:58:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:41.945 11:58:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:41.945 11:58:47 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:41.945 11:58:47 -- target/multipath.sh@113 -- # echo round-robin 00:14:41.945 11:58:47 -- target/multipath.sh@116 -- # fio_pid=74783 00:14:41.945 11:58:47 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:41.945 11:58:47 -- target/multipath.sh@118 -- # sleep 1 00:14:41.945 [global] 00:14:41.945 thread=1 00:14:41.945 invalidate=1 00:14:41.945 rw=randrw 00:14:41.945 time_based=1 00:14:41.945 runtime=6 00:14:41.945 ioengine=libaio 00:14:41.945 direct=1 00:14:41.945 bs=4096 00:14:41.945 iodepth=128 00:14:41.945 norandommap=0 00:14:41.945 numjobs=1 00:14:41.945 00:14:41.945 verify_dump=1 00:14:41.945 verify_backlog=512 00:14:41.945 verify_state_save=0 00:14:41.945 do_verify=1 00:14:41.945 verify=crc32c-intel 00:14:41.945 [job0] 00:14:41.945 filename=/dev/nvme0n1 00:14:41.945 Could not set queue depth (nvme0n1) 00:14:41.945 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.945 fio-3.35 00:14:41.945 Starting 1 thread 00:14:42.878 11:58:48 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:43.137 11:58:48 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:43.704 11:58:48 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:43.704 11:58:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:43.704 11:58:48 -- target/multipath.sh@22 -- # local timeout=20 00:14:43.704 11:58:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:43.704 11:58:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:43.704 11:58:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:43.704 11:58:48 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:43.704 11:58:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:43.704 11:58:48 -- target/multipath.sh@22 -- # local timeout=20 00:14:43.704 11:58:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:43.704 11:58:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:43.704 11:58:48 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:43.704 11:58:48 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:43.704 11:58:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:43.963 11:58:49 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:43.963 11:58:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:43.963 11:58:49 -- target/multipath.sh@22 -- # local timeout=20 00:14:43.963 11:58:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:43.963 11:58:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:43.963 11:58:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:43.963 11:58:49 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:43.963 11:58:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:43.963 11:58:49 -- target/multipath.sh@22 -- # local timeout=20 00:14:43.963 11:58:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:43.963 11:58:49 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:43.963 11:58:49 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:43.963 11:58:49 -- target/multipath.sh@132 -- # wait 74783 00:14:48.149 00:14:48.149 job0: (groupid=0, jobs=1): err= 0: pid=74804: Fri Nov 29 11:58:53 2024 00:14:48.149 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(241MiB/6002msec) 00:14:48.149 slat (usec): min=5, max=8533, avg=46.79, stdev=213.97 00:14:48.149 clat (usec): min=662, max=17766, avg=8476.78, stdev=1831.12 00:14:48.149 lat (usec): min=676, max=17776, avg=8523.58, stdev=1837.29 00:14:48.149 clat percentiles (usec): 00:14:48.149 | 1.00th=[ 3851], 5.00th=[ 5473], 10.00th=[ 6390], 20.00th=[ 7373], 00:14:48.150 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:14:48.150 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10552], 95.00th=[12256], 00:14:48.150 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15270], 99.95th=[16057], 00:14:48.150 | 99.99th=[17695] 00:14:48.150 bw ( KiB/s): min=11872, max=28640, per=53.27%, avg=21879.27, stdev=5006.20, samples=11 00:14:48.150 iops : min= 2968, max= 7160, avg=5469.82, stdev=1251.55, samples=11 00:14:48.150 write: IOPS=6103, BW=23.8MiB/s (25.0MB/s)(130MiB/5451msec); 0 zone resets 00:14:48.150 slat (usec): min=13, max=2388, avg=59.12, stdev=145.92 00:14:48.150 clat (usec): min=998, max=15691, avg=7208.21, stdev=1605.39 00:14:48.150 lat (usec): min=1023, max=15715, avg=7267.34, stdev=1616.01 00:14:48.150 clat percentiles (usec): 00:14:48.150 | 1.00th=[ 3228], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5866], 00:14:48.150 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7767], 00:14:48.150 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8717], 95.00th=[ 9110], 00:14:48.150 | 99.00th=[11469], 99.50th=[12256], 99.90th=[13304], 99.95th=[13698], 00:14:48.150 | 99.99th=[14746] 00:14:48.150 bw ( KiB/s): min=12352, max=29664, per=89.66%, avg=21890.91, stdev=4748.15, samples=11 00:14:48.150 iops : min= 3088, max= 7416, avg=5472.73, stdev=1187.04, samples=11 00:14:48.150 lat (usec) : 750=0.01%, 1000=0.02% 00:14:48.150 lat (msec) : 2=0.11%, 4=2.11%, 10=88.51%, 20=9.25% 00:14:48.150 cpu : usr=6.01%, sys=22.86%, ctx=5323, majf=0, minf=90 00:14:48.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:48.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:48.150 issued rwts: total=61624,33271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:48.150 00:14:48.150 Run status group 0 (all jobs): 00:14:48.150 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=241MiB (252MB), run=6002-6002msec 00:14:48.150 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=130MiB (136MB), run=5451-5451msec 00:14:48.150 00:14:48.150 Disk stats (read/write): 00:14:48.150 nvme0n1: ios=60946/32571, merge=0/0, ticks=492830/218842, in_queue=711672, util=98.70% 00:14:48.150 11:58:53 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:48.150 11:58:53 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.150 11:58:53 -- common/autotest_common.sh@1208 -- # local i=0 00:14:48.150 11:58:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:48.150 11:58:53 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.408 11:58:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:48.408 11:58:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.408 11:58:53 -- common/autotest_common.sh@1220 -- # return 0 00:14:48.408 11:58:53 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.676 11:58:53 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:48.676 11:58:53 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:48.676 11:58:53 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:48.676 11:58:53 -- target/multipath.sh@144 -- # nvmftestfini 00:14:48.676 11:58:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:48.676 11:58:53 -- nvmf/common.sh@116 -- # sync 00:14:48.676 11:58:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:48.676 11:58:54 -- nvmf/common.sh@119 -- # set +e 00:14:48.676 11:58:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:48.676 11:58:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:48.676 rmmod nvme_tcp 00:14:48.676 rmmod nvme_fabrics 00:14:48.676 rmmod nvme_keyring 00:14:48.676 11:58:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:48.676 11:58:54 -- nvmf/common.sh@123 -- # set -e 00:14:48.676 11:58:54 -- nvmf/common.sh@124 -- # return 0 00:14:48.676 11:58:54 -- nvmf/common.sh@477 -- # '[' -n 74585 ']' 00:14:48.676 11:58:54 -- nvmf/common.sh@478 -- # killprocess 74585 00:14:48.676 11:58:54 -- common/autotest_common.sh@936 -- # '[' -z 74585 ']' 00:14:48.676 11:58:54 -- common/autotest_common.sh@940 -- # kill -0 74585 00:14:48.676 11:58:54 -- common/autotest_common.sh@941 -- # uname 00:14:48.676 11:58:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.676 11:58:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74585 00:14:48.676 11:58:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.676 11:58:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.676 11:58:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74585' 00:14:48.676 killing process with pid 74585 00:14:48.676 11:58:54 -- common/autotest_common.sh@955 -- # kill 74585 00:14:48.676 11:58:54 -- common/autotest_common.sh@960 -- # wait 74585 00:14:49.242 11:58:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:49.242 11:58:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:49.242 11:58:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:49.242 11:58:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.242 11:58:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:49.242 11:58:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.242 11:58:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.242 11:58:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.242 11:58:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:49.242 ************************************ 00:14:49.242 END TEST nvmf_multipath 00:14:49.242 ************************************ 00:14:49.242 00:14:49.242 real 0m19.956s 00:14:49.242 user 1m15.660s 00:14:49.242 sys 0m8.952s 00:14:49.242 11:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:49.242 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.242 11:58:54 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:49.242 11:58:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:49.242 11:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.242 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:14:49.242 ************************************ 00:14:49.242 START TEST nvmf_zcopy 00:14:49.242 ************************************ 00:14:49.242 11:58:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:49.242 * Looking for test storage... 00:14:49.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:49.243 11:58:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:49.243 11:58:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:49.243 11:58:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:49.501 11:58:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:49.501 11:58:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:49.501 11:58:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:49.501 11:58:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:49.501 11:58:54 -- scripts/common.sh@335 -- # IFS=.-: 00:14:49.501 11:58:54 -- scripts/common.sh@335 -- # read -ra ver1 00:14:49.501 11:58:54 -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.501 11:58:54 -- scripts/common.sh@336 -- # read -ra ver2 00:14:49.501 11:58:54 -- scripts/common.sh@337 -- # local 'op=<' 00:14:49.501 11:58:54 -- scripts/common.sh@339 -- # ver1_l=2 00:14:49.501 11:58:54 -- scripts/common.sh@340 -- # ver2_l=1 00:14:49.501 11:58:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:49.501 11:58:54 -- scripts/common.sh@343 -- # case "$op" in 00:14:49.501 11:58:54 -- scripts/common.sh@344 -- # : 1 00:14:49.501 11:58:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:49.501 11:58:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.501 11:58:54 -- scripts/common.sh@364 -- # decimal 1 00:14:49.501 11:58:54 -- scripts/common.sh@352 -- # local d=1 00:14:49.501 11:58:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.501 11:58:54 -- scripts/common.sh@354 -- # echo 1 00:14:49.501 11:58:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:49.501 11:58:54 -- scripts/common.sh@365 -- # decimal 2 00:14:49.501 11:58:54 -- scripts/common.sh@352 -- # local d=2 00:14:49.501 11:58:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.501 11:58:54 -- scripts/common.sh@354 -- # echo 2 00:14:49.501 11:58:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:49.501 11:58:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:49.501 11:58:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:49.501 11:58:54 -- scripts/common.sh@367 -- # return 0 00:14:49.501 11:58:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.501 11:58:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:49.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.501 --rc genhtml_branch_coverage=1 00:14:49.501 --rc genhtml_function_coverage=1 00:14:49.501 --rc genhtml_legend=1 00:14:49.501 --rc geninfo_all_blocks=1 00:14:49.501 --rc geninfo_unexecuted_blocks=1 00:14:49.501 00:14:49.501 ' 00:14:49.501 11:58:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:49.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.501 --rc genhtml_branch_coverage=1 00:14:49.501 --rc genhtml_function_coverage=1 00:14:49.501 --rc genhtml_legend=1 00:14:49.501 --rc geninfo_all_blocks=1 00:14:49.501 --rc geninfo_unexecuted_blocks=1 00:14:49.501 00:14:49.501 ' 00:14:49.501 11:58:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:49.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.501 --rc genhtml_branch_coverage=1 00:14:49.501 --rc genhtml_function_coverage=1 00:14:49.501 --rc genhtml_legend=1 00:14:49.501 --rc geninfo_all_blocks=1 00:14:49.501 --rc geninfo_unexecuted_blocks=1 00:14:49.501 00:14:49.501 ' 00:14:49.501 11:58:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:49.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.501 --rc genhtml_branch_coverage=1 00:14:49.501 --rc genhtml_function_coverage=1 00:14:49.501 --rc genhtml_legend=1 00:14:49.502 --rc geninfo_all_blocks=1 00:14:49.502 --rc geninfo_unexecuted_blocks=1 00:14:49.502 00:14:49.502 ' 00:14:49.502 11:58:54 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.502 11:58:54 -- nvmf/common.sh@7 -- # uname -s 00:14:49.502 11:58:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.502 11:58:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.502 11:58:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.502 11:58:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.502 11:58:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.502 11:58:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.502 11:58:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.502 11:58:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.502 11:58:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.502 11:58:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:14:49.502 
11:58:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:14:49.502 11:58:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.502 11:58:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.502 11:58:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.502 11:58:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.502 11:58:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.502 11:58:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.502 11:58:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.502 11:58:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.502 11:58:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.502 11:58:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.502 11:58:54 -- paths/export.sh@5 -- # export PATH 00:14:49.502 11:58:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.502 11:58:54 -- nvmf/common.sh@46 -- # : 0 00:14:49.502 11:58:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:49.502 11:58:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:49.502 11:58:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:49.502 11:58:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.502 11:58:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.502 11:58:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:49.502 11:58:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:49.502 11:58:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:49.502 11:58:54 -- target/zcopy.sh@12 -- # nvmftestinit 00:14:49.502 11:58:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:49.502 11:58:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.502 11:58:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:49.502 11:58:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:49.502 11:58:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:49.502 11:58:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.502 11:58:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.502 11:58:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.502 11:58:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:49.502 11:58:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:49.502 11:58:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.502 11:58:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.502 11:58:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.502 11:58:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:49.502 11:58:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.502 11:58:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.502 11:58:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.502 11:58:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.502 11:58:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.502 11:58:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.502 11:58:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.502 11:58:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.502 11:58:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:49.502 11:58:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:49.502 Cannot find device "nvmf_tgt_br" 00:14:49.502 11:58:54 -- nvmf/common.sh@154 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.502 Cannot find device "nvmf_tgt_br2" 00:14:49.502 11:58:54 -- nvmf/common.sh@155 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:49.502 11:58:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:49.502 Cannot find device "nvmf_tgt_br" 00:14:49.502 11:58:54 -- nvmf/common.sh@157 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:49.502 Cannot find device "nvmf_tgt_br2" 00:14:49.502 11:58:54 -- nvmf/common.sh@158 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:49.502 11:58:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:49.502 11:58:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.502 11:58:54 -- nvmf/common.sh@161 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.502 11:58:54 -- nvmf/common.sh@162 -- # true 00:14:49.502 11:58:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.502 11:58:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.502 11:58:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.502 11:58:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.502 11:58:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.502 11:58:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.502 11:58:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.502 11:58:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.502 11:58:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.502 11:58:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:49.760 11:58:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:49.760 11:58:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:49.760 11:58:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:49.760 11:58:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.760 11:58:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.760 11:58:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.760 11:58:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:49.760 11:58:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:49.761 11:58:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.761 11:58:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.761 11:58:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.761 11:58:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.761 11:58:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.761 11:58:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:49.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:14:49.761 00:14:49.761 --- 10.0.0.2 ping statistics --- 00:14:49.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.761 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:49.761 11:58:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:49.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:14:49.761 00:14:49.761 --- 10.0.0.3 ping statistics --- 00:14:49.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.761 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:49.761 11:58:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:49.761 00:14:49.761 --- 10.0.0.1 ping statistics --- 00:14:49.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.761 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:49.761 11:58:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.761 11:58:55 -- nvmf/common.sh@421 -- # return 0 00:14:49.761 11:58:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.761 11:58:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.761 11:58:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:49.761 11:58:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:49.761 11:58:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.761 11:58:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:49.761 11:58:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:49.761 11:58:55 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:49.761 11:58:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.761 11:58:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.761 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:49.761 11:58:55 -- nvmf/common.sh@469 -- # nvmfpid=75055 00:14:49.761 11:58:55 -- nvmf/common.sh@470 -- # waitforlisten 75055 00:14:49.761 11:58:55 -- common/autotest_common.sh@829 -- # '[' -z 75055 ']' 00:14:49.761 11:58:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:49.761 11:58:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.761 11:58:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.761 11:58:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.761 11:58:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.761 11:58:55 -- common/autotest_common.sh@10 -- # set +x 00:14:49.761 [2024-11-29 11:58:55.201049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:49.761 [2024-11-29 11:58:55.201187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.019 [2024-11-29 11:58:55.345283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.019 [2024-11-29 11:58:55.444475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:50.019 [2024-11-29 11:58:55.444684] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.019 [2024-11-29 11:58:55.444701] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.019 [2024-11-29 11:58:55.444713] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.019 [2024-11-29 11:58:55.444744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.955 11:58:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.955 11:58:56 -- common/autotest_common.sh@862 -- # return 0 00:14:50.955 11:58:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.955 11:58:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 11:58:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.955 11:58:56 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:50.955 11:58:56 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 [2024-11-29 11:58:56.297415] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 [2024-11-29 11:58:56.313720] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 malloc0 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:50.955 11:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.955 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:14:50.955 11:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.955 11:58:56 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:50.955 11:58:56 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:50.955 11:58:56 -- nvmf/common.sh@520 -- # config=() 00:14:50.955 11:58:56 -- nvmf/common.sh@520 -- # local subsystem config 00:14:50.955 11:58:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:50.955 11:58:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:50.955 { 00:14:50.955 "params": { 00:14:50.955 "name": "Nvme$subsystem", 00:14:50.955 "trtype": "$TEST_TRANSPORT", 
00:14:50.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.955 "adrfam": "ipv4", 00:14:50.955 "trsvcid": "$NVMF_PORT", 00:14:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.955 "hdgst": ${hdgst:-false}, 00:14:50.955 "ddgst": ${ddgst:-false} 00:14:50.955 }, 00:14:50.955 "method": "bdev_nvme_attach_controller" 00:14:50.955 } 00:14:50.955 EOF 00:14:50.955 )") 00:14:50.955 11:58:56 -- nvmf/common.sh@542 -- # cat 00:14:50.955 11:58:56 -- nvmf/common.sh@544 -- # jq . 00:14:50.955 11:58:56 -- nvmf/common.sh@545 -- # IFS=, 00:14:50.955 11:58:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:50.955 "params": { 00:14:50.955 "name": "Nvme1", 00:14:50.955 "trtype": "tcp", 00:14:50.955 "traddr": "10.0.0.2", 00:14:50.955 "adrfam": "ipv4", 00:14:50.955 "trsvcid": "4420", 00:14:50.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.955 "hdgst": false, 00:14:50.955 "ddgst": false 00:14:50.955 }, 00:14:50.955 "method": "bdev_nvme_attach_controller" 00:14:50.955 }' 00:14:50.955 [2024-11-29 11:58:56.424859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:50.955 [2024-11-29 11:58:56.425007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75088 ] 00:14:51.214 [2024-11-29 11:58:56.565665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.214 [2024-11-29 11:58:56.700058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.474 Running I/O for 10 seconds... 00:15:01.448 00:15:01.448 Latency(us) 00:15:01.448 [2024-11-29T11:59:06.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.448 [2024-11-29T11:59:06.959Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:01.448 Verification LBA range: start 0x0 length 0x1000 00:15:01.448 Nvme1n1 : 10.01 8959.13 69.99 0.00 0.00 14249.84 1489.45 20852.36 00:15:01.448 [2024-11-29T11:59:06.959Z] =================================================================================================================== 00:15:01.448 [2024-11-29T11:59:06.959Z] Total : 8959.13 69.99 0.00 0.00 14249.84 1489.45 20852.36 00:15:01.706 11:59:07 -- target/zcopy.sh@39 -- # perfpid=75211 00:15:01.706 11:59:07 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:01.706 11:59:07 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:01.706 11:59:07 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:01.706 11:59:07 -- common/autotest_common.sh@10 -- # set +x 00:15:01.706 11:59:07 -- nvmf/common.sh@520 -- # config=() 00:15:01.706 11:59:07 -- nvmf/common.sh@520 -- # local subsystem config 00:15:01.706 11:59:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:01.706 11:59:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:01.706 { 00:15:01.706 "params": { 00:15:01.706 "name": "Nvme$subsystem", 00:15:01.706 "trtype": "$TEST_TRANSPORT", 00:15:01.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.706 "adrfam": "ipv4", 00:15:01.706 "trsvcid": "$NVMF_PORT", 00:15:01.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.706 "hdgst": ${hdgst:-false}, 00:15:01.706 "ddgst": ${ddgst:-false} 
00:15:01.706 }, 00:15:01.706 "method": "bdev_nvme_attach_controller" 00:15:01.706 } 00:15:01.706 EOF 00:15:01.706 )") 00:15:01.706 11:59:07 -- nvmf/common.sh@542 -- # cat 00:15:01.706 [2024-11-29 11:59:07.140137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.140184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 11:59:07 -- nvmf/common.sh@544 -- # jq . 00:15:01.707 11:59:07 -- nvmf/common.sh@545 -- # IFS=, 00:15:01.707 11:59:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:01.707 "params": { 00:15:01.707 "name": "Nvme1", 00:15:01.707 "trtype": "tcp", 00:15:01.707 "traddr": "10.0.0.2", 00:15:01.707 "adrfam": "ipv4", 00:15:01.707 "trsvcid": "4420", 00:15:01.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.707 "hdgst": false, 00:15:01.707 "ddgst": false 00:15:01.707 }, 00:15:01.707 "method": "bdev_nvme_attach_controller" 00:15:01.707 }' 00:15:01.707 [2024-11-29 11:59:07.152080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.152664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.160083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.160239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.168085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.168116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.180086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.180115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.186934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:01.707 [2024-11-29 11:59:07.187027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75211 ] 00:15:01.707 [2024-11-29 11:59:07.192099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.192127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.200100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.200130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.707 [2024-11-29 11:59:07.212103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.707 [2024-11-29 11:59:07.212131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.224134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.224164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.236104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.236130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.248118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.248303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.260113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.260142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.272115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.272142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.284118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.284146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.296122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.296291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.308133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.308282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.320163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.320301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.321498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.965 [2024-11-29 11:59:07.332184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.332382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:01.965 [2024-11-29 11:59:07.344170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.344339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.356158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.356311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.368164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.965 [2024-11-29 11:59:07.368316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.965 [2024-11-29 11:59:07.380190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.380396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.392177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.392334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.404179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.404330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.412919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.966 [2024-11-29 11:59:07.416195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.416328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.428186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.428331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.436201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.436384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.448223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.448439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.460214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.460495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.966 [2024-11-29 11:59:07.472215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.966 [2024-11-29 11:59:07.472394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.484216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.484394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.496224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.496395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.508219] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.508348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.520245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.520408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.532247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.532385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.544254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.544400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.556265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.556405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.568278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.224 [2024-11-29 11:59:07.568419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.224 [2024-11-29 11:59:07.580287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.580442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 Running I/O for 5 seconds... 00:15:02.225 [2024-11-29 11:59:07.592292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.592432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.609603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.609752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.625969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.626123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.642892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.643079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.660062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.660216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.670355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.670541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.686197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.686356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.703531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 
[2024-11-29 11:59:07.703703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.225 [2024-11-29 11:59:07.718534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.225 [2024-11-29 11:59:07.718697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.734363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.734519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.750855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.751027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.769274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.769490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.783932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.784111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.808200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.808459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.822032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.822334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.839837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.840264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.857365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.857614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.874224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.874272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.890060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.483 [2024-11-29 11:59:07.890107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.483 [2024-11-29 11:59:07.901100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.901146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.918045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.918096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.934400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.934446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.950561] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.950635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.965899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.965995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.981303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.981361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.484 [2024-11-29 11:59:07.990585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.484 [2024-11-29 11:59:07.990622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.006901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.006939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.024211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.024396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.039718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.039757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.057489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.057563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.072999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.073045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.090383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.090421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.105716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.105772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.121620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.121659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.138086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.138124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.154872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.155064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.171889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.171936] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.181694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.742 [2024-11-29 11:59:08.181730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.742 [2024-11-29 11:59:08.196463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.743 [2024-11-29 11:59:08.196502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.743 [2024-11-29 11:59:08.213741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.743 [2024-11-29 11:59:08.213814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.743 [2024-11-29 11:59:08.230981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.743 [2024-11-29 11:59:08.231028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.743 [2024-11-29 11:59:08.248146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:02.743 [2024-11-29 11:59:08.248186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.001 [2024-11-29 11:59:08.264762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.001 [2024-11-29 11:59:08.264799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.001 [2024-11-29 11:59:08.281217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.001 [2024-11-29 11:59:08.281255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.001 [2024-11-29 11:59:08.298153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.001 [2024-11-29 11:59:08.298188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.001 [2024-11-29 11:59:08.316515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.001 [2024-11-29 11:59:08.316815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.001 [2024-11-29 11:59:08.332759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.001 [2024-11-29 11:59:08.332806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.344345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.344409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.362356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.362419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.377518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.377597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.393067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.393133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.411076] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.411108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.426558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.426635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.442643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.442677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.461128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.461163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.475190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.475229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.490823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.490860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.002 [2024-11-29 11:59:08.500292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.002 [2024-11-29 11:59:08.500326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.259 [2024-11-29 11:59:08.515886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.259 [2024-11-29 11:59:08.515936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.259 [2024-11-29 11:59:08.531789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.259 [2024-11-29 11:59:08.531861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.259 [2024-11-29 11:59:08.548841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.259 [2024-11-29 11:59:08.548892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.259 [2024-11-29 11:59:08.565002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.565040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.582992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.583030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.598507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.598747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.614932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.614987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.632302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.632340] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.647499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.647546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.663732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.663774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.679594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.679632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.697716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.697751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.712333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.712381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.727658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.727691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.744934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.744969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.260 [2024-11-29 11:59:08.761327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.260 [2024-11-29 11:59:08.761367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.778127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.778163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.794854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.794907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.812805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.812843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.827927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.828131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.838674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.838710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.853886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.854072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.869468] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.869671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.885556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.885652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.904342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.904652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.919399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.919436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.930336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.930375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.945241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.945278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.961891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.961926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.978441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.978476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:08.994481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:08.994546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.517 [2024-11-29 11:59:09.013275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.517 [2024-11-29 11:59:09.013321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.028045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.028260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.043988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.044024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.054378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.054667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.069612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.069684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.088306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.088352] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.104506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.104584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.119587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.119659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.135173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.135220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.146773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.146820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.161764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.161800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.177774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.177809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.187042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.187096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.203547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.203615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.219640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.219675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.228898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.229137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.245014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.245050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.263711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.263752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.775 [2024-11-29 11:59:09.279059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:03.775 [2024-11-29 11:59:09.279270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.297097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.297149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.312718] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.312764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.330656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.330706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.345067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.345103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.362247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.362458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.377807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.377849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.388226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.388275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.404047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.404132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.420241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.420279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.438344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.438390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.452290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.452334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.470322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.470358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.484488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.484567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.501040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.501295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.517741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.517794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.034 [2024-11-29 11:59:09.529147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.034 [2024-11-29 11:59:09.529191] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.545304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.545383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.562925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.562995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.576031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.576294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.593007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.593053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.608444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.608496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.623899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.623941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.633885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.633951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.650077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.650111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.667824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.667872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.684469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.684600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.700768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.700817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.718727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.718772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.735969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.736224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.752359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.752399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.768455] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.768489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.293 [2024-11-29 11:59:09.788115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.293 [2024-11-29 11:59:09.788166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.803319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.803365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.819986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.820021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.838270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.838475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.854169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.854341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.872257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.872428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.888782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.888988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.903684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.903887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.919835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.920037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.937802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.937991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.953656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.953841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.970512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.970744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:09.988165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:09.988364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:10.004045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:10.004248] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:10.020322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:10.020558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:10.037102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:10.037290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.551 [2024-11-29 11:59:10.054150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.551 [2024-11-29 11:59:10.054370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.069967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.070158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.088202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.088385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.102650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.102683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.117354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.117389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.133647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.133682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.150208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.150248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.168812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.168994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.188930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.188968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.198812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.198855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.213258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.213301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.229629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.229665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.246910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.246958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.263724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.263761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.279466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.279760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.290613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.290656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.809 [2024-11-29 11:59:10.305377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:04.809 [2024-11-29 11:59:10.305436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.323699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.323736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.341134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.341170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.357288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.357324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.373566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.373634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.394285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.394456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.409955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.409992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.427934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.428119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.442865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.443052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.458846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.458882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.477694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.477728] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.490966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.491006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.506108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.506143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.514955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.514991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.530437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.530472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.547102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.547137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.067 [2024-11-29 11:59:10.561219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.067 [2024-11-29 11:59:10.561252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.576881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.576927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.593966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.594000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.610261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.610298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.626471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.626549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.644670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.644705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.659843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.660079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.671701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.671736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.688244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.688300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.704033] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.704067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.715475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.715523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.733351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.733394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.748052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.748086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.763763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.763797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.781619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.781660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.796428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.796463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.807665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.807709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.325 [2024-11-29 11:59:10.824897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.325 [2024-11-29 11:59:10.825089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.839041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.839078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.854485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.854540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.871753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.871810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.887256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.887452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.905332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.905367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.921245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.921279] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.938820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.938855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.954536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.954616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.972236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.972291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:10.987627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:10.987659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.006236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.006432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.020158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.020191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.035805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.035838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.052189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.052222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.068841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.069050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.583 [2024-11-29 11:59:11.084508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.583 [2024-11-29 11:59:11.084727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.102189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.102233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.117136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.117179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.133598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.133632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.150393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.150428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.166261] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.166308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.184350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.184702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.198483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.198567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.214518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.214552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.230630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.230662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.249295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.249347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.265378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.265417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.281114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.281168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.298271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.298324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.314660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.314715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.331242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.841 [2024-11-29 11:59:11.331308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.841 [2024-11-29 11:59:11.348144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.348358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.363740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.363908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.379731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.379921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.396672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.396863] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.413056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.413292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.429856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.430069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.447009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.447219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.462415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.462610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.473270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.473452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.489798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.489972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.506241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.506431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.523894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.524088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.539150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.539401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.557781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.558009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.572668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.572894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.588476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.588711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.099 [2024-11-29 11:59:11.599519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.099 [2024-11-29 11:59:11.599566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.616020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.616203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.631121] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.631340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.647414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.647448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.666061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.666252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.680418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.680460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.696155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.696191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.713641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.713674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.730007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.730041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.747880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.747914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.763592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.763654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.780684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.780717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.797260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.797297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.814539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.814605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.830737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.830771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.847457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.847535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.358 [2024-11-29 11:59:11.864869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.358 [2024-11-29 11:59:11.864924] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.882638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.882698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.898393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.898432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.918778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.918827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.934474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.934532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.951280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.951339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.969984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.970181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:11.984502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:11.984582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.000780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.000841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.016293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.016565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.027977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.028163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.044272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.044327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.061612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.061665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.075362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.075561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.090843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.091073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.100834] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.100877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.617 [2024-11-29 11:59:12.115042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.617 [2024-11-29 11:59:12.115077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.130155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.130375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.141884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.141919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.158142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.158195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.174377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.174432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.192459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.192494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.207992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.208026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.225816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.225853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.240285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.240646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.255489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.255589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.271962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.272269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.286656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.286689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.302337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.302543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.318403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.318628] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.336772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.336808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.352058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.352125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.369896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.876 [2024-11-29 11:59:12.370129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.876 [2024-11-29 11:59:12.384777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.385040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.400509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.400768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.409771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.409818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.426165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.426205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.445627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.445681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.460115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.460395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.476069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.476146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.492039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.492098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.509224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.509260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.525389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.525423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.541290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:07.135 [2024-11-29 11:59:12.541325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.135 [2024-11-29 11:59:12.559713] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.559753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135 [2024-11-29 11:59:12.575762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.575807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135 [2024-11-29 11:59:12.592702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.592761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135
00:15:07.135 Latency(us)
00:15:07.135 [2024-11-29T11:59:12.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:07.135 [2024-11-29T11:59:12.646Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:07.135 Nvme1n1 : 5.01 11745.32 91.76 0.00 0.00 10885.29 4081.11 25737.77
00:15:07.135 [2024-11-29T11:59:12.646Z] ===================================================================================================================
00:15:07.135 [2024-11-29T11:59:12.646Z] Total : 11745.32 91.76 0.00 0.00 10885.29 4081.11 25737.77
00:15:07.135 [2024-11-29 11:59:12.603657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.603722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135 [2024-11-29 11:59:12.615606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.615890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135 [2024-11-29 11:59:12.627627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.627659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.135 [2024-11-29 11:59:12.639581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.135 [2024-11-29 11:59:12.639626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.651573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.651776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.663582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.663641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.675597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.675639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.687589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.687830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.699645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.699673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.711619]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.711662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.723640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.723870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.735636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.735665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.747611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.747640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.759666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.759857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.771626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.771657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.783627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.783657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.795675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.795699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.807664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.807699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 [2024-11-29 11:59:12.819666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:07.395 [2024-11-29 11:59:12.819858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:07.395 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75211) - No such process
00:15:07.395 11:59:12 -- target/zcopy.sh@49 -- # wait 75211
00:15:07.395 11:59:12 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:07.395 11:59:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.395 11:59:12 -- common/autotest_common.sh@10 -- # set +x
00:15:07.395 11:59:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.395 11:59:12 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:07.395 11:59:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.395 11:59:12 -- common/autotest_common.sh@10 -- # set +x
00:15:07.395 delay0
00:15:07.395 11:59:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.395 11:59:12 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:07.395 11:59:12 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.395 11:59:12 -- common/autotest_common.sh@10 -- # set +x
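
The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the expected outcome of the test repeatedly calling nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached; the trace that follows then removes the namespace, wraps malloc0 in a delay bdev, and re-attaches it as NSID 1. A minimal sketch of the same sequence against a running SPDK target, assuming the scripts/rpc.py client (rpc_cmd in the trace is the test framework's wrapper around it), an existing malloc0 bdev, and an illustrative loop count:

# Sketch only: assumes an SPDK nvmf target is up and nqn.2016-06.io.spdk:cnode1
# already exposes a namespace at NSID 1 backed by malloc0.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Re-adding NSID 1 while it is still in use is expected to fail with the
# "Requested NSID 1 already in use" errors logged above.
for _ in 1 2 3; do
  "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
done

# Detach NSID 1, hide malloc0 behind a delay bdev (latencies are in
# microseconds, so 1000000 is one second), and re-attach it as NSID 1,
# mirroring the zcopy.sh trace above.
"$rpc" nvmf_subsystem_remove_ns "$nqn" 1
"$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_subsystem_add_ns "$nqn" delay0 -n 1
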
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.395 11:59:12 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:07.654 [2024-11-29 11:59:13.020335] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:14.218 Initializing NVMe Controllers 00:15:14.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:14.218 Initialization complete. Launching workers. 00:15:14.218 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 12578 00:15:14.219 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12751, failed to submit 91 00:15:14.219 success 12669, unsuccess 82, failed 0 00:15:14.219 11:59:19 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:14.219 11:59:19 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:14.219 11:59:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.219 11:59:19 -- nvmf/common.sh@116 -- # sync 00:15:14.219 11:59:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.219 11:59:19 -- nvmf/common.sh@119 -- # set +e 00:15:14.219 11:59:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.219 11:59:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.219 rmmod nvme_tcp 00:15:14.219 rmmod nvme_fabrics 00:15:14.219 rmmod nvme_keyring 00:15:14.219 11:59:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.219 11:59:19 -- nvmf/common.sh@123 -- # set -e 00:15:14.219 11:59:19 -- nvmf/common.sh@124 -- # return 0 00:15:14.219 11:59:19 -- nvmf/common.sh@477 -- # '[' -n 75055 ']' 00:15:14.219 11:59:19 -- nvmf/common.sh@478 -- # killprocess 75055 00:15:14.219 11:59:19 -- common/autotest_common.sh@936 -- # '[' -z 75055 ']' 00:15:14.219 11:59:19 -- common/autotest_common.sh@940 -- # kill -0 75055 00:15:14.219 11:59:19 -- common/autotest_common.sh@941 -- # uname 00:15:14.219 11:59:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.219 11:59:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75055 00:15:14.219 killing process with pid 75055 00:15:14.219 11:59:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:14.219 11:59:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:14.219 11:59:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75055' 00:15:14.219 11:59:19 -- common/autotest_common.sh@955 -- # kill 75055 00:15:14.219 11:59:19 -- common/autotest_common.sh@960 -- # wait 75055 00:15:14.219 11:59:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.219 11:59:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.219 11:59:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.219 11:59:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.219 11:59:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.219 11:59:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.219 11:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.219 11:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.219 11:59:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:14.219 ************************************ 00:15:14.219 END TEST 
nvmf_zcopy 00:15:14.219 ************************************ 00:15:14.219 00:15:14.219 real 0m25.135s 00:15:14.219 user 0m40.329s 00:15:14.219 sys 0m7.373s 00:15:14.219 11:59:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:14.219 11:59:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.478 11:59:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:14.478 11:59:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.478 11:59:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.478 ************************************ 00:15:14.478 START TEST nvmf_nmic 00:15:14.478 ************************************ 00:15:14.478 11:59:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:14.478 * Looking for test storage... 00:15:14.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:14.478 11:59:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:14.478 11:59:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:14.478 11:59:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:14.478 11:59:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:14.478 11:59:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:14.478 11:59:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:14.478 11:59:19 -- scripts/common.sh@335 -- # IFS=.-: 00:15:14.478 11:59:19 -- scripts/common.sh@335 -- # read -ra ver1 00:15:14.478 11:59:19 -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.478 11:59:19 -- scripts/common.sh@336 -- # read -ra ver2 00:15:14.478 11:59:19 -- scripts/common.sh@337 -- # local 'op=<' 00:15:14.478 11:59:19 -- scripts/common.sh@339 -- # ver1_l=2 00:15:14.478 11:59:19 -- scripts/common.sh@340 -- # ver2_l=1 00:15:14.478 11:59:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:14.478 11:59:19 -- scripts/common.sh@343 -- # case "$op" in 00:15:14.478 11:59:19 -- scripts/common.sh@344 -- # : 1 00:15:14.478 11:59:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:14.478 11:59:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.478 11:59:19 -- scripts/common.sh@364 -- # decimal 1 00:15:14.478 11:59:19 -- scripts/common.sh@352 -- # local d=1 00:15:14.478 11:59:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.478 11:59:19 -- scripts/common.sh@354 -- # echo 1 00:15:14.478 11:59:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:14.478 11:59:19 -- scripts/common.sh@365 -- # decimal 2 00:15:14.478 11:59:19 -- scripts/common.sh@352 -- # local d=2 00:15:14.478 11:59:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.478 11:59:19 -- scripts/common.sh@354 -- # echo 2 00:15:14.478 11:59:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:14.478 11:59:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:14.478 11:59:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:14.478 11:59:19 -- scripts/common.sh@367 -- # return 0 00:15:14.478 11:59:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.478 --rc genhtml_branch_coverage=1 00:15:14.478 --rc genhtml_function_coverage=1 00:15:14.478 --rc genhtml_legend=1 00:15:14.478 --rc geninfo_all_blocks=1 00:15:14.478 --rc geninfo_unexecuted_blocks=1 00:15:14.478 00:15:14.478 ' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.478 --rc genhtml_branch_coverage=1 00:15:14.478 --rc genhtml_function_coverage=1 00:15:14.478 --rc genhtml_legend=1 00:15:14.478 --rc geninfo_all_blocks=1 00:15:14.478 --rc geninfo_unexecuted_blocks=1 00:15:14.478 00:15:14.478 ' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.478 --rc genhtml_branch_coverage=1 00:15:14.478 --rc genhtml_function_coverage=1 00:15:14.478 --rc genhtml_legend=1 00:15:14.478 --rc geninfo_all_blocks=1 00:15:14.478 --rc geninfo_unexecuted_blocks=1 00:15:14.478 00:15:14.478 ' 00:15:14.478 11:59:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:14.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.478 --rc genhtml_branch_coverage=1 00:15:14.478 --rc genhtml_function_coverage=1 00:15:14.478 --rc genhtml_legend=1 00:15:14.478 --rc geninfo_all_blocks=1 00:15:14.478 --rc geninfo_unexecuted_blocks=1 00:15:14.478 00:15:14.478 ' 00:15:14.478 11:59:19 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.478 11:59:19 -- nvmf/common.sh@7 -- # uname -s 00:15:14.478 11:59:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.478 11:59:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.478 11:59:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.478 11:59:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.478 11:59:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.478 11:59:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.478 11:59:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.478 11:59:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.478 11:59:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.478 11:59:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.478 11:59:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:15:14.478 
11:59:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:15:14.478 11:59:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.478 11:59:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.478 11:59:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.478 11:59:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.478 11:59:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.478 11:59:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.478 11:59:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.478 11:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.478 11:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.478 11:59:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.478 11:59:19 -- paths/export.sh@5 -- # export PATH 00:15:14.478 11:59:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.478 11:59:19 -- nvmf/common.sh@46 -- # : 0 00:15:14.478 11:59:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:14.478 11:59:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:14.478 11:59:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:14.479 11:59:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.479 11:59:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.479 11:59:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:14.479 11:59:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:14.479 11:59:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:14.479 11:59:19 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.479 11:59:19 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.479 11:59:19 -- target/nmic.sh@14 -- # nvmftestinit 00:15:14.479 11:59:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:14.479 11:59:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.479 11:59:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:14.479 11:59:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:14.479 11:59:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:14.479 11:59:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.479 11:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.479 11:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.479 11:59:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:14.479 11:59:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:14.479 11:59:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:14.479 11:59:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:14.479 11:59:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:14.479 11:59:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:14.479 11:59:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.479 11:59:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.479 11:59:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.479 11:59:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:14.479 11:59:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.479 11:59:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.479 11:59:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.479 11:59:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.479 11:59:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.479 11:59:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.479 11:59:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.479 11:59:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.479 11:59:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:14.737 11:59:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:14.737 Cannot find device "nvmf_tgt_br" 00:15:14.737 11:59:20 -- nvmf/common.sh@154 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.737 Cannot find device "nvmf_tgt_br2" 00:15:14.737 11:59:20 -- nvmf/common.sh@155 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:14.737 11:59:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:14.737 Cannot find device "nvmf_tgt_br" 00:15:14.737 11:59:20 -- nvmf/common.sh@157 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:14.737 Cannot find device "nvmf_tgt_br2" 00:15:14.737 11:59:20 -- nvmf/common.sh@158 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:14.737 11:59:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:14.737 11:59:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.737 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:14.737 11:59:20 -- nvmf/common.sh@161 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.737 11:59:20 -- nvmf/common.sh@162 -- # true 00:15:14.737 11:59:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.737 11:59:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.737 11:59:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.737 11:59:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.737 11:59:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.737 11:59:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.737 11:59:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.737 11:59:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.737 11:59:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.996 11:59:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:14.996 11:59:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:14.996 11:59:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:14.996 11:59:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:14.996 11:59:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.996 11:59:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:14.996 11:59:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:14.996 11:59:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:14.996 11:59:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:14.996 11:59:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:14.996 11:59:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:14.996 11:59:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:14.996 11:59:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:14.996 11:59:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:14.996 11:59:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:14.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:15:14.996 00:15:14.996 --- 10.0.0.2 ping statistics --- 00:15:14.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.996 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:14.996 11:59:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:14.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:14.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:14.996 00:15:14.996 --- 10.0.0.3 ping statistics --- 00:15:14.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.996 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:14.996 11:59:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:14.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:14.996 00:15:14.996 --- 10.0.0.1 ping statistics --- 00:15:14.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.996 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:14.996 11:59:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.996 11:59:20 -- nvmf/common.sh@421 -- # return 0 00:15:14.996 11:59:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:14.996 11:59:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.996 11:59:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:14.996 11:59:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:14.996 11:59:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.996 11:59:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:14.996 11:59:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:14.996 11:59:20 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:14.996 11:59:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:14.996 11:59:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.996 11:59:20 -- common/autotest_common.sh@10 -- # set +x 00:15:14.996 11:59:20 -- nvmf/common.sh@469 -- # nvmfpid=75545 00:15:14.996 11:59:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.996 11:59:20 -- nvmf/common.sh@470 -- # waitforlisten 75545 00:15:14.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.996 11:59:20 -- common/autotest_common.sh@829 -- # '[' -z 75545 ']' 00:15:14.996 11:59:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.996 11:59:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.996 11:59:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.996 11:59:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.996 11:59:20 -- common/autotest_common.sh@10 -- # set +x 00:15:14.996 [2024-11-29 11:59:20.450396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:14.996 [2024-11-29 11:59:20.450496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.255 [2024-11-29 11:59:20.590494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.255 [2024-11-29 11:59:20.683830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.255 [2024-11-29 11:59:20.684045] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.255 [2024-11-29 11:59:20.684059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.255 [2024-11-29 11:59:20.684068] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.255 [2024-11-29 11:59:20.684242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.255 [2024-11-29 11:59:20.684556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.255 [2024-11-29 11:59:20.685249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.255 [2024-11-29 11:59:20.685279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.189 11:59:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.189 11:59:21 -- common/autotest_common.sh@862 -- # return 0 00:15:16.189 11:59:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:16.189 11:59:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 11:59:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.189 11:59:21 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.189 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 [2024-11-29 11:59:21.532982] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.189 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.189 11:59:21 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:16.189 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 Malloc0 00:15:16.189 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.189 11:59:21 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:16.189 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.189 11:59:21 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:16.189 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.189 11:59:21 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.189 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.189 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.189 [2024-11-29 11:59:21.602256] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.189 test case1: single bdev can't be used in multiple subsystems 00:15:16.189 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.189 11:59:21 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:16.190 11:59:21 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:16.190 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.190 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.190 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.190 11:59:21 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:16.190 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:16.190 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.190 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.190 11:59:21 -- target/nmic.sh@28 -- # nmic_status=0 00:15:16.190 11:59:21 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:16.190 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.190 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.190 [2024-11-29 11:59:21.626086] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:16.190 [2024-11-29 11:59:21.626127] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:16.190 [2024-11-29 11:59:21.626139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.190 request: 00:15:16.190 { 00:15:16.190 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:16.190 "namespace": { 00:15:16.190 "bdev_name": "Malloc0" 00:15:16.190 }, 00:15:16.190 "method": "nvmf_subsystem_add_ns", 00:15:16.190 "req_id": 1 00:15:16.190 } 00:15:16.190 Got JSON-RPC error response 00:15:16.190 response: 00:15:16.190 { 00:15:16.190 "code": -32602, 00:15:16.190 "message": "Invalid parameters" 00:15:16.190 } 00:15:16.190 Adding namespace failed - expected result. 00:15:16.190 test case2: host connect to nvmf target in multiple paths 00:15:16.190 11:59:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:16.190 11:59:21 -- target/nmic.sh@29 -- # nmic_status=1 00:15:16.190 11:59:21 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:16.190 11:59:21 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:16.190 11:59:21 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:16.190 11:59:21 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:16.190 11:59:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.190 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.190 [2024-11-29 11:59:21.638292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:16.190 11:59:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.190 11:59:21 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.449 11:59:21 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:16.449 11:59:21 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.449 11:59:21 -- common/autotest_common.sh@1187 -- # local i=0 00:15:16.449 11:59:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.449 11:59:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:16.449 11:59:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:18.979 11:59:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:18.979 11:59:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.979 11:59:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:18.979 11:59:23 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:15:18.979 11:59:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.979 11:59:23 -- common/autotest_common.sh@1197 -- # return 0 00:15:18.979 11:59:23 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:18.979 [global] 00:15:18.979 thread=1 00:15:18.979 invalidate=1 00:15:18.979 rw=write 00:15:18.979 time_based=1 00:15:18.979 runtime=1 00:15:18.979 ioengine=libaio 00:15:18.979 direct=1 00:15:18.979 bs=4096 00:15:18.979 iodepth=1 00:15:18.979 norandommap=0 00:15:18.979 numjobs=1 00:15:18.979 00:15:18.979 verify_dump=1 00:15:18.979 verify_backlog=512 00:15:18.979 verify_state_save=0 00:15:18.979 do_verify=1 00:15:18.979 verify=crc32c-intel 00:15:18.979 [job0] 00:15:18.979 filename=/dev/nvme0n1 00:15:18.979 Could not set queue depth (nvme0n1) 00:15:18.979 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:18.979 fio-3.35 00:15:18.979 Starting 1 thread 00:15:19.913 00:15:19.913 job0: (groupid=0, jobs=1): err= 0: pid=75641: Fri Nov 29 11:59:25 2024 00:15:19.913 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:15:19.913 slat (nsec): min=11216, max=48256, avg=14419.13, stdev=4128.22 00:15:19.913 clat (usec): min=145, max=1010, avg=218.84, stdev=33.86 00:15:19.913 lat (usec): min=160, max=1023, avg=233.26, stdev=33.82 00:15:19.913 clat percentiles (usec): 00:15:19.913 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:15:19.913 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:15:19.913 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 269], 00:15:19.913 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 404], 99.95th=[ 474], 00:15:19.913 | 99.99th=[ 1012] 00:15:19.913 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:19.913 slat (usec): min=16, max=104, avg=21.84, stdev= 6.66 00:15:19.913 clat (usec): min=91, max=4924, avg=136.16, stdev=128.58 00:15:19.913 lat (usec): min=110, max=4961, avg=158.00, stdev=129.54 00:15:19.913 clat percentiles (usec): 00:15:19.914 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 113], 00:15:19.914 | 30.00th=[ 119], 40.00th=[ 125], 50.00th=[ 130], 60.00th=[ 135], 00:15:19.914 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 167], 00:15:19.914 | 99.00th=[ 198], 99.50th=[ 221], 99.90th=[ 2376], 99.95th=[ 3261], 00:15:19.914 | 99.99th=[ 4948] 00:15:19.914 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:15:19.914 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:19.914 lat (usec) : 100=1.62%, 250=92.02%, 500=6.23%, 750=0.04% 00:15:19.914 lat (msec) : 2=0.04%, 4=0.04%, 10=0.02% 00:15:19.914 cpu : usr=1.80%, sys=7.60%, ctx=5075, majf=0, minf=5 00:15:19.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:19.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.914 issued rwts: total=2515,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:19.914 00:15:19.914 Run status group 0 (all jobs): 00:15:19.914 READ: bw=9.81MiB/s (10.3MB/s), 9.81MiB/s-9.81MiB/s (10.3MB/s-10.3MB/s), io=9.82MiB (10.3MB), run=1001-1001msec 00:15:19.914 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), 
run=1001-1001msec 00:15:19.914 00:15:19.914 Disk stats (read/write): 00:15:19.914 nvme0n1: ios=2118/2560, merge=0/0, ticks=495/367, in_queue=862, util=90.66% 00:15:19.914 11:59:25 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:19.914 11:59:25 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.914 11:59:25 -- common/autotest_common.sh@1208 -- # local i=0 00:15:19.914 11:59:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:19.914 11:59:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.914 11:59:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:19.914 11:59:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.914 11:59:25 -- common/autotest_common.sh@1220 -- # return 0 00:15:19.914 11:59:25 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:19.914 11:59:25 -- target/nmic.sh@53 -- # nvmftestfini 00:15:19.914 11:59:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.914 11:59:25 -- nvmf/common.sh@116 -- # sync 00:15:19.914 11:59:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.914 11:59:25 -- nvmf/common.sh@119 -- # set +e 00:15:19.914 11:59:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.914 11:59:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.914 rmmod nvme_tcp 00:15:19.914 rmmod nvme_fabrics 00:15:19.914 rmmod nvme_keyring 00:15:20.173 11:59:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:20.173 11:59:25 -- nvmf/common.sh@123 -- # set -e 00:15:20.173 11:59:25 -- nvmf/common.sh@124 -- # return 0 00:15:20.173 11:59:25 -- nvmf/common.sh@477 -- # '[' -n 75545 ']' 00:15:20.173 11:59:25 -- nvmf/common.sh@478 -- # killprocess 75545 00:15:20.173 11:59:25 -- common/autotest_common.sh@936 -- # '[' -z 75545 ']' 00:15:20.173 11:59:25 -- common/autotest_common.sh@940 -- # kill -0 75545 00:15:20.173 11:59:25 -- common/autotest_common.sh@941 -- # uname 00:15:20.173 11:59:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.173 11:59:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75545 00:15:20.173 killing process with pid 75545 00:15:20.173 11:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.173 11:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.173 11:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75545' 00:15:20.173 11:59:25 -- common/autotest_common.sh@955 -- # kill 75545 00:15:20.173 11:59:25 -- common/autotest_common.sh@960 -- # wait 75545 00:15:20.432 11:59:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:20.432 11:59:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:20.432 11:59:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:20.432 11:59:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.432 11:59:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:20.432 11:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.432 11:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.432 11:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.432 11:59:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:20.432 00:15:20.432 real 0m6.000s 00:15:20.432 user 0m19.217s 00:15:20.432 sys 0m2.063s 00:15:20.432 11:59:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:15:20.432 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:15:20.432 ************************************ 00:15:20.432 END TEST nvmf_nmic 00:15:20.432 ************************************ 00:15:20.432 11:59:25 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:20.432 11:59:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:20.432 11:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.432 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:15:20.432 ************************************ 00:15:20.432 START TEST nvmf_fio_target 00:15:20.432 ************************************ 00:15:20.432 11:59:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:20.432 * Looking for test storage... 00:15:20.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:20.432 11:59:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:20.432 11:59:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:20.432 11:59:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.691 11:59:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.691 11:59:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.691 11:59:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.691 11:59:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.691 11:59:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.691 11:59:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.691 11:59:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.691 11:59:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.691 11:59:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.691 11:59:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.691 11:59:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.691 11:59:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.691 11:59:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.691 11:59:25 -- scripts/common.sh@344 -- # : 1 00:15:20.691 11:59:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.691 11:59:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:20.691 11:59:25 -- scripts/common.sh@364 -- # decimal 1 00:15:20.691 11:59:25 -- scripts/common.sh@352 -- # local d=1 00:15:20.691 11:59:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.691 11:59:25 -- scripts/common.sh@354 -- # echo 1 00:15:20.691 11:59:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.691 11:59:25 -- scripts/common.sh@365 -- # decimal 2 00:15:20.691 11:59:25 -- scripts/common.sh@352 -- # local d=2 00:15:20.691 11:59:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.691 11:59:25 -- scripts/common.sh@354 -- # echo 2 00:15:20.691 11:59:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.691 11:59:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.691 11:59:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.691 11:59:25 -- scripts/common.sh@367 -- # return 0 00:15:20.691 11:59:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.691 11:59:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.691 --rc genhtml_branch_coverage=1 00:15:20.691 --rc genhtml_function_coverage=1 00:15:20.691 --rc genhtml_legend=1 00:15:20.691 --rc geninfo_all_blocks=1 00:15:20.691 --rc geninfo_unexecuted_blocks=1 00:15:20.691 00:15:20.691 ' 00:15:20.691 11:59:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.691 --rc genhtml_branch_coverage=1 00:15:20.691 --rc genhtml_function_coverage=1 00:15:20.691 --rc genhtml_legend=1 00:15:20.691 --rc geninfo_all_blocks=1 00:15:20.691 --rc geninfo_unexecuted_blocks=1 00:15:20.691 00:15:20.691 ' 00:15:20.691 11:59:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.691 --rc genhtml_branch_coverage=1 00:15:20.691 --rc genhtml_function_coverage=1 00:15:20.691 --rc genhtml_legend=1 00:15:20.691 --rc geninfo_all_blocks=1 00:15:20.691 --rc geninfo_unexecuted_blocks=1 00:15:20.691 00:15:20.691 ' 00:15:20.691 11:59:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.691 --rc genhtml_branch_coverage=1 00:15:20.691 --rc genhtml_function_coverage=1 00:15:20.691 --rc genhtml_legend=1 00:15:20.691 --rc geninfo_all_blocks=1 00:15:20.691 --rc geninfo_unexecuted_blocks=1 00:15:20.691 00:15:20.691 ' 00:15:20.691 11:59:25 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.691 11:59:25 -- nvmf/common.sh@7 -- # uname -s 00:15:20.691 11:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.691 11:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.691 11:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.691 11:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.691 11:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.691 11:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.691 11:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.691 11:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.691 11:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.691 11:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.691 11:59:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:15:20.691 
11:59:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:15:20.691 11:59:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.691 11:59:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.691 11:59:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.691 11:59:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.691 11:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.691 11:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.691 11:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.691 11:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.691 11:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.691 11:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.691 11:59:26 -- paths/export.sh@5 -- # export PATH 00:15:20.691 11:59:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.691 11:59:26 -- nvmf/common.sh@46 -- # : 0 00:15:20.691 11:59:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.691 11:59:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.691 11:59:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.691 11:59:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.691 11:59:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.691 11:59:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:20.691 11:59:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.691 11:59:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.691 11:59:26 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.691 11:59:26 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.691 11:59:26 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.691 11:59:26 -- target/fio.sh@16 -- # nvmftestinit 00:15:20.691 11:59:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.691 11:59:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.691 11:59:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:20.691 11:59:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.691 11:59:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.691 11:59:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.691 11:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.691 11:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.691 11:59:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.691 11:59:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.691 11:59:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.692 11:59:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.692 11:59:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.692 11:59:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.692 11:59:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.692 11:59:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.692 11:59:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.692 11:59:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.692 11:59:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.692 11:59:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.692 11:59:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.692 11:59:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.692 11:59:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.692 11:59:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.692 11:59:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.692 11:59:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.692 11:59:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.692 11:59:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.692 Cannot find device "nvmf_tgt_br" 00:15:20.692 11:59:26 -- nvmf/common.sh@154 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.692 Cannot find device "nvmf_tgt_br2" 00:15:20.692 11:59:26 -- nvmf/common.sh@155 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.692 11:59:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.692 Cannot find device "nvmf_tgt_br" 00:15:20.692 11:59:26 -- nvmf/common.sh@157 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.692 Cannot find device "nvmf_tgt_br2" 00:15:20.692 11:59:26 -- nvmf/common.sh@158 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.692 11:59:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.692 11:59:26 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.692 11:59:26 -- nvmf/common.sh@161 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.692 11:59:26 -- nvmf/common.sh@162 -- # true 00:15:20.692 11:59:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.692 11:59:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.692 11:59:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.692 11:59:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.692 11:59:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.980 11:59:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.980 11:59:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.980 11:59:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.980 11:59:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.980 11:59:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.980 11:59:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.980 11:59:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.980 11:59:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.980 11:59:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.980 11:59:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.980 11:59:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.980 11:59:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.980 11:59:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.980 11:59:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.980 11:59:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.980 11:59:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.980 11:59:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.980 11:59:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.980 11:59:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:20.980 00:15:20.980 --- 10.0.0.2 ping statistics --- 00:15:20.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.980 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:20.980 11:59:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:20.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:20.980 00:15:20.980 --- 10.0.0.3 ping statistics --- 00:15:20.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.980 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:20.980 11:59:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:20.980 00:15:20.980 --- 10.0.0.1 ping statistics --- 00:15:20.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.980 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:20.980 11:59:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.980 11:59:26 -- nvmf/common.sh@421 -- # return 0 00:15:20.980 11:59:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.980 11:59:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.980 11:59:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.980 11:59:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.980 11:59:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.980 11:59:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.980 11:59:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.980 11:59:26 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:20.980 11:59:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.980 11:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.980 11:59:26 -- common/autotest_common.sh@10 -- # set +x 00:15:20.980 11:59:26 -- nvmf/common.sh@469 -- # nvmfpid=75831 00:15:20.980 11:59:26 -- nvmf/common.sh@470 -- # waitforlisten 75831 00:15:20.980 11:59:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.980 11:59:26 -- common/autotest_common.sh@829 -- # '[' -z 75831 ']' 00:15:20.980 11:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.980 11:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.980 11:59:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.980 11:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.980 11:59:26 -- common/autotest_common.sh@10 -- # set +x 00:15:20.980 [2024-11-29 11:59:26.445365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:20.980 [2024-11-29 11:59:26.445484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.238 [2024-11-29 11:59:26.589774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.238 [2024-11-29 11:59:26.688141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:21.238 [2024-11-29 11:59:26.688396] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.238 [2024-11-29 11:59:26.688423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:21.238 [2024-11-29 11:59:26.688441] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.238 [2024-11-29 11:59:26.688614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.238 [2024-11-29 11:59:26.689098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.238 [2024-11-29 11:59:26.689224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.238 [2024-11-29 11:59:26.689382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.175 11:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.175 11:59:27 -- common/autotest_common.sh@862 -- # return 0 00:15:22.175 11:59:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:22.175 11:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.175 11:59:27 -- common/autotest_common.sh@10 -- # set +x 00:15:22.175 11:59:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.175 11:59:27 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.434 [2024-11-29 11:59:27.717692] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.434 11:59:27 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.693 11:59:28 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:22.693 11:59:28 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.951 11:59:28 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:22.951 11:59:28 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:23.520 11:59:28 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:23.520 11:59:28 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:23.520 11:59:29 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:23.520 11:59:29 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:24.087 11:59:29 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.346 11:59:29 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:24.346 11:59:29 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.605 11:59:29 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:24.605 11:59:29 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.863 11:59:30 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:24.863 11:59:30 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:25.121 11:59:30 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:25.380 11:59:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:25.380 11:59:30 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.644 11:59:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:25.644 11:59:30 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.902 11:59:31 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.161 [2024-11-29 11:59:31.480433] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.161 11:59:31 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:26.419 11:59:31 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:26.679 11:59:31 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.679 11:59:32 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:26.679 11:59:32 -- common/autotest_common.sh@1187 -- # local i=0 00:15:26.679 11:59:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.679 11:59:32 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:15:26.679 11:59:32 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:15:26.679 11:59:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:29.211 11:59:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:29.211 11:59:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:29.211 11:59:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.211 11:59:34 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:15:29.211 11:59:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.211 11:59:34 -- common/autotest_common.sh@1197 -- # return 0 00:15:29.211 11:59:34 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:29.211 [global] 00:15:29.211 thread=1 00:15:29.211 invalidate=1 00:15:29.211 rw=write 00:15:29.211 time_based=1 00:15:29.211 runtime=1 00:15:29.211 ioengine=libaio 00:15:29.211 direct=1 00:15:29.211 bs=4096 00:15:29.211 iodepth=1 00:15:29.211 norandommap=0 00:15:29.211 numjobs=1 00:15:29.211 00:15:29.211 verify_dump=1 00:15:29.211 verify_backlog=512 00:15:29.211 verify_state_save=0 00:15:29.211 do_verify=1 00:15:29.211 verify=crc32c-intel 00:15:29.211 [job0] 00:15:29.211 filename=/dev/nvme0n1 00:15:29.211 [job1] 00:15:29.211 filename=/dev/nvme0n2 00:15:29.211 [job2] 00:15:29.211 filename=/dev/nvme0n3 00:15:29.211 [job3] 00:15:29.211 filename=/dev/nvme0n4 00:15:29.211 Could not set queue depth (nvme0n1) 00:15:29.211 Could not set queue depth (nvme0n2) 00:15:29.211 Could not set queue depth (nvme0n3) 00:15:29.211 Could not set queue depth (nvme0n4) 00:15:29.211 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.211 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.211 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.211 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.211 fio-3.35 00:15:29.211 Starting 4 threads 00:15:30.154 00:15:30.154 job0: (groupid=0, jobs=1): err= 0: pid=76023: Fri Nov 29 11:59:35 2024 00:15:30.154 read: IOPS=1674, BW=6697KiB/s (6858kB/s)(6704KiB/1001msec) 
00:15:30.154 slat (nsec): min=11356, max=90457, avg=15996.14, stdev=5578.37 00:15:30.154 clat (usec): min=152, max=668, avg=254.41, stdev=76.85 00:15:30.154 lat (usec): min=165, max=688, avg=270.40, stdev=78.34 00:15:30.154 clat percentiles (usec): 00:15:30.154 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 196], 20.00th=[ 208], 00:15:30.154 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:15:30.154 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 318], 95.00th=[ 449], 00:15:30.154 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 644], 99.95th=[ 668], 00:15:30.154 | 99.99th=[ 668] 00:15:30.154 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:30.154 slat (nsec): min=12475, max=92119, avg=25220.33, stdev=7846.73 00:15:30.154 clat (usec): min=87, max=926, avg=238.24, stdev=88.57 00:15:30.154 lat (usec): min=117, max=949, avg=263.46, stdev=89.95 00:15:30.154 clat percentiles (usec): 00:15:30.154 | 1.00th=[ 130], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 176], 00:15:30.154 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 221], 00:15:30.154 | 70.00th=[ 241], 80.00th=[ 310], 90.00th=[ 388], 95.00th=[ 433], 00:15:30.154 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 717], 00:15:30.154 | 99.99th=[ 930] 00:15:30.154 bw ( KiB/s): min= 8496, max= 8496, per=24.76%, avg=8496.00, stdev= 0.00, samples=1 00:15:30.154 iops : min= 2124, max= 2124, avg=2124.00, stdev= 0.00, samples=1 00:15:30.154 lat (usec) : 100=0.03%, 250=68.90%, 500=29.62%, 750=1.42%, 1000=0.03% 00:15:30.154 cpu : usr=2.10%, sys=5.90%, ctx=3725, majf=0, minf=11 00:15:30.154 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.154 issued rwts: total=1676,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.154 job1: (groupid=0, jobs=1): err= 0: pid=76024: Fri Nov 29 11:59:35 2024 00:15:30.154 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:30.154 slat (nsec): min=12476, max=97831, avg=16263.38, stdev=5210.52 00:15:30.154 clat (usec): min=135, max=385, avg=221.68, stdev=31.92 00:15:30.154 lat (usec): min=149, max=398, avg=237.94, stdev=32.22 00:15:30.154 clat percentiles (usec): 00:15:30.154 | 1.00th=[ 153], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:15:30.154 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:15:30.154 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:15:30.154 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 359], 00:15:30.154 | 99.99th=[ 388] 00:15:30.154 write: IOPS=2439, BW=9758KiB/s (9992kB/s)(9768KiB/1001msec); 0 zone resets 00:15:30.154 slat (nsec): min=18534, max=93077, avg=24561.57, stdev=6740.80 00:15:30.154 clat (usec): min=93, max=1985, avg=182.04, stdev=63.59 00:15:30.154 lat (usec): min=114, max=2020, avg=206.60, stdev=64.22 00:15:30.154 clat percentiles (usec): 00:15:30.154 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 139], 20.00th=[ 153], 00:15:30.154 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 188], 00:15:30.154 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 241], 00:15:30.154 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 775], 99.95th=[ 1958], 00:15:30.154 | 99.99th=[ 1991] 00:15:30.154 bw ( KiB/s): min= 9040, max= 9040, per=26.35%, avg=9040.00, stdev= 0.00, samples=1 00:15:30.154 iops : 
min= 2260, max= 2260, avg=2260.00, stdev= 0.00, samples=1 00:15:30.154 lat (usec) : 100=0.16%, 250=89.53%, 500=10.22%, 750=0.02%, 1000=0.02% 00:15:30.154 lat (msec) : 2=0.04% 00:15:30.155 cpu : usr=1.60%, sys=7.50%, ctx=4490, majf=0, minf=7 00:15:30.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 issued rwts: total=2048,2442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.155 job2: (groupid=0, jobs=1): err= 0: pid=76025: Fri Nov 29 11:59:35 2024 00:15:30.155 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7592KiB/1001msec) 00:15:30.155 slat (nsec): min=11500, max=72056, avg=17111.71, stdev=5783.98 00:15:30.155 clat (usec): min=154, max=3055, avg=250.76, stdev=81.63 00:15:30.155 lat (usec): min=167, max=3069, avg=267.87, stdev=81.94 00:15:30.155 clat percentiles (usec): 00:15:30.155 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 215], 00:15:30.155 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 253], 00:15:30.155 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 326], 00:15:30.155 | 99.00th=[ 420], 99.50th=[ 474], 99.90th=[ 963], 99.95th=[ 3064], 00:15:30.155 | 99.99th=[ 3064] 00:15:30.155 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:30.155 slat (usec): min=18, max=165, avg=27.23, stdev= 8.75 00:15:30.155 clat (usec): min=120, max=380, avg=209.10, stdev=32.41 00:15:30.155 lat (usec): min=141, max=402, avg=236.33, stdev=32.85 00:15:30.155 clat percentiles (usec): 00:15:30.155 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 182], 00:15:30.155 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:15:30.155 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 269], 00:15:30.155 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 314], 00:15:30.155 | 99.99th=[ 383] 00:15:30.155 bw ( KiB/s): min= 8192, max= 8192, per=23.88%, avg=8192.00, stdev= 0.00, samples=2 00:15:30.155 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:30.155 lat (usec) : 250=73.62%, 500=26.20%, 750=0.10%, 1000=0.05% 00:15:30.155 lat (msec) : 4=0.03% 00:15:30.155 cpu : usr=2.00%, sys=6.90%, ctx=3946, majf=0, minf=13 00:15:30.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 issued rwts: total=1898,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.155 job3: (groupid=0, jobs=1): err= 0: pid=76026: Fri Nov 29 11:59:35 2024 00:15:30.155 read: IOPS=1550, BW=6202KiB/s (6351kB/s)(6208KiB/1001msec) 00:15:30.155 slat (nsec): min=10877, max=80571, avg=18489.17, stdev=6616.07 00:15:30.155 clat (usec): min=166, max=1194, avg=259.04, stdev=69.62 00:15:30.155 lat (usec): min=182, max=1205, avg=277.53, stdev=70.34 00:15:30.155 clat percentiles (usec): 00:15:30.155 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:15:30.155 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 255], 00:15:30.155 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 396], 00:15:30.155 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 652], 99.95th=[ 1188], 
00:15:30.155 | 99.99th=[ 1188] 00:15:30.155 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:30.155 slat (usec): min=15, max=203, avg=30.07, stdev=11.46 00:15:30.155 clat (usec): min=115, max=547, avg=244.08, stdev=85.13 00:15:30.155 lat (usec): min=135, max=579, avg=274.15, stdev=90.49 00:15:30.155 clat percentiles (usec): 00:15:30.155 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 178], 00:15:30.155 | 30.00th=[ 190], 40.00th=[ 202], 50.00th=[ 212], 60.00th=[ 227], 00:15:30.155 | 70.00th=[ 251], 80.00th=[ 330], 90.00th=[ 392], 95.00th=[ 424], 00:15:30.155 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 515], 99.95th=[ 519], 00:15:30.155 | 99.99th=[ 545] 00:15:30.155 bw ( KiB/s): min= 8192, max= 8192, per=23.88%, avg=8192.00, stdev= 0.00, samples=1 00:15:30.155 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:30.155 lat (usec) : 250=63.67%, 500=35.14%, 750=1.17% 00:15:30.155 lat (msec) : 2=0.03% 00:15:30.155 cpu : usr=1.60%, sys=7.30%, ctx=3603, majf=0, minf=13 00:15:30.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.155 issued rwts: total=1552,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.155 00:15:30.155 Run status group 0 (all jobs): 00:15:30.155 READ: bw=28.0MiB/s (29.4MB/s), 6202KiB/s-8184KiB/s (6351kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:15:30.155 WRITE: bw=33.5MiB/s (35.1MB/s), 8184KiB/s-9758KiB/s (8380kB/s-9992kB/s), io=33.5MiB (35.2MB), run=1001-1001msec 00:15:30.155 00:15:30.155 Disk stats (read/write): 00:15:30.155 nvme0n1: ios=1586/1872, merge=0/0, ticks=413/437, in_queue=850, util=89.18% 00:15:30.155 nvme0n2: ios=1773/2048, merge=0/0, ticks=414/405, in_queue=819, util=87.84% 00:15:30.155 nvme0n3: ios=1536/1813, merge=0/0, ticks=405/401, in_queue=806, util=89.21% 00:15:30.155 nvme0n4: ios=1536/1725, merge=0/0, ticks=396/392, in_queue=788, util=89.77% 00:15:30.155 11:59:35 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:30.155 [global] 00:15:30.155 thread=1 00:15:30.155 invalidate=1 00:15:30.155 rw=randwrite 00:15:30.155 time_based=1 00:15:30.155 runtime=1 00:15:30.155 ioengine=libaio 00:15:30.155 direct=1 00:15:30.155 bs=4096 00:15:30.155 iodepth=1 00:15:30.155 norandommap=0 00:15:30.155 numjobs=1 00:15:30.155 00:15:30.155 verify_dump=1 00:15:30.155 verify_backlog=512 00:15:30.155 verify_state_save=0 00:15:30.155 do_verify=1 00:15:30.155 verify=crc32c-intel 00:15:30.155 [job0] 00:15:30.155 filename=/dev/nvme0n1 00:15:30.155 [job1] 00:15:30.155 filename=/dev/nvme0n2 00:15:30.155 [job2] 00:15:30.155 filename=/dev/nvme0n3 00:15:30.155 [job3] 00:15:30.155 filename=/dev/nvme0n4 00:15:30.155 Could not set queue depth (nvme0n1) 00:15:30.155 Could not set queue depth (nvme0n2) 00:15:30.155 Could not set queue depth (nvme0n3) 00:15:30.155 Could not set queue depth (nvme0n4) 00:15:30.413 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.413 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.413 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.413 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:30.413 fio-3.35 00:15:30.413 Starting 4 threads 00:15:31.790 00:15:31.790 job0: (groupid=0, jobs=1): err= 0: pid=76079: Fri Nov 29 11:59:36 2024 00:15:31.790 read: IOPS=1091, BW=4368KiB/s (4472kB/s)(4372KiB/1001msec) 00:15:31.790 slat (nsec): min=15133, max=63513, avg=27386.42, stdev=6504.10 00:15:31.790 clat (usec): min=227, max=913, avg=412.19, stdev=62.87 00:15:31.790 lat (usec): min=252, max=946, avg=439.57, stdev=63.15 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 293], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 367], 00:15:31.790 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 420], 00:15:31.790 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 510], 00:15:31.790 | 99.00th=[ 611], 99.50th=[ 660], 99.90th=[ 824], 99.95th=[ 914], 00:15:31.790 | 99.99th=[ 914] 00:15:31.790 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:31.790 slat (usec): min=16, max=100, avg=37.32, stdev= 8.88 00:15:31.790 clat (usec): min=149, max=1091, avg=294.85, stdev=78.85 00:15:31.790 lat (usec): min=176, max=1135, avg=332.17, stdev=80.75 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 223], 00:15:31.790 | 30.00th=[ 241], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 306], 00:15:31.790 | 70.00th=[ 338], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 420], 00:15:31.790 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 562], 99.95th=[ 1090], 00:15:31.790 | 99.99th=[ 1090] 00:15:31.790 bw ( KiB/s): min= 5956, max= 5956, per=20.90%, avg=5956.00, stdev= 0.00, samples=1 00:15:31.790 iops : min= 1489, max= 1489, avg=1489.00, stdev= 0.00, samples=1 00:15:31.790 lat (usec) : 250=19.78%, 500=77.25%, 750=2.78%, 1000=0.15% 00:15:31.790 lat (msec) : 2=0.04% 00:15:31.790 cpu : usr=2.00%, sys=7.20%, ctx=2629, majf=0, minf=17 00:15:31.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 issued rwts: total=1093,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.790 job1: (groupid=0, jobs=1): err= 0: pid=76080: Fri Nov 29 11:59:36 2024 00:15:31.790 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:31.790 slat (nsec): min=18805, max=87891, avg=35126.12, stdev=11978.55 00:15:31.790 clat (usec): min=258, max=1692, avg=467.69, stdev=96.97 00:15:31.790 lat (usec): min=293, max=1725, avg=502.81, stdev=101.14 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 326], 5.00th=[ 351], 10.00th=[ 371], 20.00th=[ 392], 00:15:31.790 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 469], 00:15:31.790 | 70.00th=[ 510], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 619], 00:15:31.790 | 99.00th=[ 685], 99.50th=[ 734], 99.90th=[ 1287], 99.95th=[ 1696], 00:15:31.790 | 99.99th=[ 1696] 00:15:31.790 write: IOPS=1089, BW=4360KiB/s (4464kB/s)(4364KiB/1001msec); 0 zone resets 00:15:31.790 slat (usec): min=27, max=121, avg=46.91, stdev=11.95 00:15:31.790 clat (usec): min=161, max=3501, avg=389.23, stdev=130.53 00:15:31.790 lat (usec): min=192, max=3534, avg=436.15, stdev=133.18 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 192], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 314], 00:15:31.790 | 30.00th=[ 338], 40.00th=[ 359], 
50.00th=[ 375], 60.00th=[ 396], 00:15:31.790 | 70.00th=[ 416], 80.00th=[ 453], 90.00th=[ 515], 95.00th=[ 562], 00:15:31.790 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 709], 99.95th=[ 3490], 00:15:31.790 | 99.99th=[ 3490] 00:15:31.790 bw ( KiB/s): min= 4463, max= 4463, per=15.66%, avg=4463.00, stdev= 0.00, samples=1 00:15:31.790 iops : min= 1115, max= 1115, avg=1115.00, stdev= 0.00, samples=1 00:15:31.790 lat (usec) : 250=2.03%, 500=75.98%, 750=21.70%, 1000=0.14% 00:15:31.790 lat (msec) : 2=0.09%, 4=0.05% 00:15:31.790 cpu : usr=2.60%, sys=6.40%, ctx=2115, majf=0, minf=15 00:15:31.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 issued rwts: total=1024,1091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.790 job2: (groupid=0, jobs=1): err= 0: pid=76081: Fri Nov 29 11:59:36 2024 00:15:31.790 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:31.790 slat (usec): min=12, max=128, avg=17.04, stdev= 6.04 00:15:31.790 clat (usec): min=150, max=3030, avg=224.03, stdev=99.63 00:15:31.790 lat (usec): min=165, max=3062, avg=241.07, stdev=100.40 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:15:31.790 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 223], 00:15:31.790 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 281], 00:15:31.790 | 99.00th=[ 388], 99.50th=[ 570], 99.90th=[ 1483], 99.95th=[ 2245], 00:15:31.790 | 99.99th=[ 3032] 00:15:31.790 write: IOPS=2411, BW=9646KiB/s (9878kB/s)(9656KiB/1001msec); 0 zone resets 00:15:31.790 slat (usec): min=18, max=110, avg=27.79, stdev=10.62 00:15:31.790 clat (usec): min=102, max=1013, avg=178.03, stdev=40.90 00:15:31.790 lat (usec): min=126, max=1058, avg=205.82, stdev=43.83 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 118], 5.00th=[ 128], 10.00th=[ 137], 20.00th=[ 147], 00:15:31.790 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 182], 00:15:31.790 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 225], 95.00th=[ 249], 00:15:31.790 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 416], 99.95th=[ 453], 00:15:31.790 | 99.99th=[ 1012] 00:15:31.790 bw ( KiB/s): min= 8471, max= 8471, per=29.72%, avg=8471.00, stdev= 0.00, samples=1 00:15:31.790 iops : min= 2117, max= 2117, avg=2117.00, stdev= 0.00, samples=1 00:15:31.790 lat (usec) : 250=90.65%, 500=9.01%, 750=0.16%, 1000=0.04% 00:15:31.790 lat (msec) : 2=0.09%, 4=0.04% 00:15:31.790 cpu : usr=2.30%, sys=7.80%, ctx=4463, majf=0, minf=5 00:15:31.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 issued rwts: total=2048,2414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.790 job3: (groupid=0, jobs=1): err= 0: pid=76082: Fri Nov 29 11:59:36 2024 00:15:31.790 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:31.790 slat (nsec): min=13507, max=59595, avg=17807.88, stdev=5121.82 00:15:31.790 clat (usec): min=170, max=7961, avg=246.21, stdev=200.63 00:15:31.790 lat (usec): min=185, max=7977, avg=264.02, stdev=201.01 00:15:31.790 clat 
percentiles (usec): 00:15:31.790 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:15:31.790 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:15:31.790 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 302], 00:15:31.790 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 2671], 99.95th=[ 3425], 00:15:31.790 | 99.99th=[ 7963] 00:15:31.790 write: IOPS=2088, BW=8356KiB/s (8556kB/s)(8364KiB/1001msec); 0 zone resets 00:15:31.790 slat (usec): min=16, max=116, avg=26.74, stdev= 7.89 00:15:31.790 clat (usec): min=118, max=520, avg=188.91, stdev=34.29 00:15:31.790 lat (usec): min=140, max=541, avg=215.65, stdev=36.33 00:15:31.790 clat percentiles (usec): 00:15:31.790 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 161], 00:15:31.790 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:15:31.790 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 235], 95.00th=[ 253], 00:15:31.790 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 343], 00:15:31.790 | 99.99th=[ 523] 00:15:31.790 bw ( KiB/s): min= 8175, max= 8175, per=28.68%, avg=8175.00, stdev= 0.00, samples=1 00:15:31.790 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:15:31.790 lat (usec) : 250=82.58%, 500=17.20%, 750=0.07% 00:15:31.790 lat (msec) : 2=0.07%, 4=0.05%, 10=0.02% 00:15:31.790 cpu : usr=1.60%, sys=7.40%, ctx=4140, majf=0, minf=11 00:15:31.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:31.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.790 issued rwts: total=2048,2091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:31.790 00:15:31.790 Run status group 0 (all jobs): 00:15:31.790 READ: bw=24.2MiB/s (25.4MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.3MiB (25.4MB), run=1001-1001msec 00:15:31.790 WRITE: bw=27.8MiB/s (29.2MB/s), 4360KiB/s-9646KiB/s (4464kB/s-9878kB/s), io=27.9MiB (29.2MB), run=1001-1001msec 00:15:31.790 00:15:31.790 Disk stats (read/write): 00:15:31.790 nvme0n1: ios=1074/1118, merge=0/0, ticks=478/361, in_queue=839, util=87.47% 00:15:31.790 nvme0n2: ios=844/1024, merge=0/0, ticks=374/416, in_queue=790, util=86.80% 00:15:31.790 nvme0n3: ios=1684/2048, merge=0/0, ticks=370/388, in_queue=758, util=88.87% 00:15:31.790 nvme0n4: ios=1536/1954, merge=0/0, ticks=387/391, in_queue=778, util=88.90% 00:15:31.790 11:59:36 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:31.790 [global] 00:15:31.790 thread=1 00:15:31.791 invalidate=1 00:15:31.791 rw=write 00:15:31.791 time_based=1 00:15:31.791 runtime=1 00:15:31.791 ioengine=libaio 00:15:31.791 direct=1 00:15:31.791 bs=4096 00:15:31.791 iodepth=128 00:15:31.791 norandommap=0 00:15:31.791 numjobs=1 00:15:31.791 00:15:31.791 verify_dump=1 00:15:31.791 verify_backlog=512 00:15:31.791 verify_state_save=0 00:15:31.791 do_verify=1 00:15:31.791 verify=crc32c-intel 00:15:31.791 [job0] 00:15:31.791 filename=/dev/nvme0n1 00:15:31.791 [job1] 00:15:31.791 filename=/dev/nvme0n2 00:15:31.791 [job2] 00:15:31.791 filename=/dev/nvme0n3 00:15:31.791 [job3] 00:15:31.791 filename=/dev/nvme0n4 00:15:31.791 Could not set queue depth (nvme0n1) 00:15:31.791 Could not set queue depth (nvme0n2) 00:15:31.791 Could not set queue depth (nvme0n3) 00:15:31.791 Could not set queue depth (nvme0n4) 00:15:31.791 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.791 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.791 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.791 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.791 fio-3.35 00:15:31.791 Starting 4 threads 00:15:33.164 00:15:33.164 job0: (groupid=0, jobs=1): err= 0: pid=76137: Fri Nov 29 11:59:38 2024 00:15:33.164 read: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1006msec) 00:15:33.164 slat (usec): min=6, max=7862, avg=182.79, stdev=944.38 00:15:33.164 clat (usec): min=3466, max=32496, avg=23361.60, stdev=4146.55 00:15:33.164 lat (usec): min=6618, max=32511, avg=23544.39, stdev=4071.09 00:15:33.164 clat percentiles (usec): 00:15:33.164 | 1.00th=[ 7046], 5.00th=[19268], 10.00th=[19530], 20.00th=[20317], 00:15:33.164 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22414], 60.00th=[23200], 00:15:33.164 | 70.00th=[23987], 80.00th=[27919], 90.00th=[30016], 95.00th=[31065], 00:15:33.164 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:15:33.164 | 99.99th=[32375] 00:15:33.164 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:15:33.164 slat (usec): min=9, max=9045, avg=164.11, stdev=811.51 00:15:33.164 clat (usec): min=11101, max=35438, avg=21440.35, stdev=5233.13 00:15:33.164 lat (usec): min=11134, max=35497, avg=21604.46, stdev=5204.54 00:15:33.164 clat percentiles (usec): 00:15:33.164 | 1.00th=[11863], 5.00th=[16319], 10.00th=[16909], 20.00th=[17171], 00:15:33.164 | 30.00th=[17957], 40.00th=[19006], 50.00th=[19792], 60.00th=[20841], 00:15:33.164 | 70.00th=[22414], 80.00th=[25297], 90.00th=[30540], 95.00th=[33817], 00:15:33.164 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:15:33.164 | 99.99th=[35390] 00:15:33.164 bw ( KiB/s): min=11776, max=12064, per=24.68%, avg=11920.00, stdev=203.65, samples=2 00:15:33.164 iops : min= 2944, max= 3016, avg=2980.00, stdev=50.91, samples=2 00:15:33.164 lat (msec) : 4=0.02%, 10=0.56%, 20=34.62%, 50=64.80% 00:15:33.164 cpu : usr=2.59%, sys=9.05%, ctx=181, majf=0, minf=9 00:15:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.164 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.164 job1: (groupid=0, jobs=1): err= 0: pid=76138: Fri Nov 29 11:59:38 2024 00:15:33.164 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:15:33.164 slat (usec): min=4, max=7419, avg=233.67, stdev=1024.43 00:15:33.164 clat (usec): min=17459, max=54805, avg=28299.06, stdev=5237.70 00:15:33.164 lat (usec): min=20699, max=54820, avg=28532.72, stdev=5344.89 00:15:33.164 clat percentiles (usec): 00:15:33.164 | 1.00th=[21103], 5.00th=[21890], 10.00th=[22152], 20.00th=[22938], 00:15:33.164 | 30.00th=[24511], 40.00th=[25822], 50.00th=[27919], 60.00th=[30278], 00:15:33.164 | 70.00th=[30802], 80.00th=[32637], 90.00th=[34341], 95.00th=[35914], 00:15:33.164 | 99.00th=[45876], 99.50th=[50070], 99.90th=[50070], 99.95th=[54789], 00:15:33.165 | 99.99th=[54789] 00:15:33.165 write: IOPS=1974, BW=7897KiB/s (8086kB/s)(7944KiB/1006msec); 0 zone 
resets 00:15:33.165 slat (usec): min=12, max=11071, avg=318.31, stdev=1118.60 00:15:33.165 clat (usec): min=4573, max=72004, avg=41668.15, stdev=12679.53 00:15:33.165 lat (usec): min=7435, max=72022, avg=41986.47, stdev=12729.16 00:15:33.165 clat percentiles (usec): 00:15:33.165 | 1.00th=[16909], 5.00th=[24773], 10.00th=[25035], 20.00th=[26346], 00:15:33.165 | 30.00th=[33817], 40.00th=[39584], 50.00th=[42730], 60.00th=[44827], 00:15:33.165 | 70.00th=[48497], 80.00th=[52691], 90.00th=[58459], 95.00th=[62653], 00:15:33.165 | 99.00th=[68682], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:15:33.165 | 99.99th=[71828] 00:15:33.165 bw ( KiB/s): min= 6672, max= 8208, per=15.40%, avg=7440.00, stdev=1086.12, samples=2 00:15:33.165 iops : min= 1668, max= 2052, avg=1860.00, stdev=271.53, samples=2 00:15:33.165 lat (msec) : 10=0.26%, 20=0.48%, 50=83.65%, 100=15.62% 00:15:33.165 cpu : usr=1.00%, sys=4.28%, ctx=271, majf=0, minf=9 00:15:33.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:15:33.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.165 issued rwts: total=1536,1986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.165 job2: (groupid=0, jobs=1): err= 0: pid=76139: Fri Nov 29 11:59:38 2024 00:15:33.165 read: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:15:33.165 slat (usec): min=5, max=5748, avg=162.03, stdev=823.42 00:15:33.165 clat (usec): min=501, max=24230, avg=20698.41, stdev=2310.11 00:15:33.165 lat (usec): min=5629, max=24253, avg=20860.44, stdev=2156.55 00:15:33.165 clat percentiles (usec): 00:15:33.165 | 1.00th=[ 6128], 5.00th=[16909], 10.00th=[19792], 20.00th=[20055], 00:15:33.165 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:15:33.165 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22938], 00:15:33.165 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:15:33.165 | 99.99th=[24249] 00:15:33.165 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:33.165 slat (usec): min=8, max=5495, avg=164.94, stdev=817.23 00:15:33.165 clat (usec): min=15034, max=23715, avg=21410.61, stdev=1295.09 00:15:33.165 lat (usec): min=16544, max=23735, avg=21575.55, stdev=1017.93 00:15:33.165 clat percentiles (usec): 00:15:33.165 | 1.00th=[16450], 5.00th=[19530], 10.00th=[20055], 20.00th=[20579], 00:15:33.165 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21627], 60.00th=[21890], 00:15:33.165 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22938], 95.00th=[23462], 00:15:33.165 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:15:33.165 | 99.99th=[23725] 00:15:33.165 bw ( KiB/s): min=12288, max=12288, per=25.44%, avg=12288.00, stdev= 0.00, samples=1 00:15:33.165 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:33.165 lat (usec) : 750=0.02% 00:15:33.165 lat (msec) : 10=0.53%, 20=14.09%, 50=85.36% 00:15:33.165 cpu : usr=3.10%, sys=6.70%, ctx=188, majf=0, minf=5 00:15:33.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:33.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.165 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.165 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:15:33.165 job3: (groupid=0, jobs=1): err= 0: pid=76140: Fri Nov 29 11:59:38 2024 00:15:33.165 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:15:33.165 slat (usec): min=9, max=4803, avg=125.17, stdev=614.15 00:15:33.165 clat (usec): min=11676, max=20772, avg=16689.00, stdev=1530.46 00:15:33.165 lat (usec): min=14764, max=20786, avg=16814.17, stdev=1409.88 00:15:33.165 clat percentiles (usec): 00:15:33.165 | 1.00th=[12649], 5.00th=[15008], 10.00th=[15401], 20.00th=[15795], 00:15:33.165 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:15:33.165 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19792], 95.00th=[20055], 00:15:33.165 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:15:33.165 | 99.99th=[20841] 00:15:33.165 write: IOPS=4008, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1002msec); 0 zone resets 00:15:33.165 slat (usec): min=10, max=6330, avg=129.02, stdev=585.18 00:15:33.165 clat (usec): min=1192, max=22257, avg=16530.58, stdev=2078.05 00:15:33.165 lat (usec): min=1214, max=22281, avg=16659.59, stdev=2005.13 00:15:33.165 clat percentiles (usec): 00:15:33.165 | 1.00th=[ 5342], 5.00th=[13829], 10.00th=[15401], 20.00th=[15926], 00:15:33.165 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16581], 60.00th=[16909], 00:15:33.165 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[19006], 00:15:33.165 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:15:33.165 | 99.99th=[22152] 00:15:33.165 bw ( KiB/s): min=14736, max=16416, per=32.25%, avg=15576.00, stdev=1187.94, samples=2 00:15:33.165 iops : min= 3684, max= 4104, avg=3894.00, stdev=296.98, samples=2 00:15:33.165 lat (msec) : 2=0.22%, 10=0.84%, 20=95.12%, 50=3.82% 00:15:33.165 cpu : usr=3.40%, sys=12.09%, ctx=238, majf=0, minf=10 00:15:33.165 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:33.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.165 issued rwts: total=3584,4017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.165 00:15:33.165 Run status group 0 (all jobs): 00:15:33.165 READ: bw=41.3MiB/s (43.3MB/s), 6107KiB/s-14.0MiB/s (6254kB/s-14.7MB/s), io=41.5MiB (43.5MB), run=1001-1006msec 00:15:33.165 WRITE: bw=47.2MiB/s (49.5MB/s), 7897KiB/s-15.7MiB/s (8086kB/s-16.4MB/s), io=47.4MiB (49.8MB), run=1001-1006msec 00:15:33.165 00:15:33.165 Disk stats (read/write): 00:15:33.165 nvme0n1: ios=2225/2560, merge=0/0, ticks=12442/12955, in_queue=25397, util=88.26% 00:15:33.165 nvme0n2: ios=1551/1607, merge=0/0, ticks=14475/20661, in_queue=35136, util=87.79% 00:15:33.165 nvme0n3: ios=2560/2592, merge=0/0, ticks=12651/12850, in_queue=25501, util=89.12% 00:15:33.165 nvme0n4: ios=3072/3424, merge=0/0, ticks=11524/12735, in_queue=24259, util=89.68% 00:15:33.165 11:59:38 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:33.165 [global] 00:15:33.165 thread=1 00:15:33.165 invalidate=1 00:15:33.165 rw=randwrite 00:15:33.165 time_based=1 00:15:33.165 runtime=1 00:15:33.165 ioengine=libaio 00:15:33.165 direct=1 00:15:33.165 bs=4096 00:15:33.165 iodepth=128 00:15:33.165 norandommap=0 00:15:33.165 numjobs=1 00:15:33.165 00:15:33.165 verify_dump=1 00:15:33.165 verify_backlog=512 00:15:33.165 verify_state_save=0 00:15:33.165 do_verify=1 00:15:33.165 verify=crc32c-intel 00:15:33.165 
[job0] 00:15:33.165 filename=/dev/nvme0n1 00:15:33.165 [job1] 00:15:33.165 filename=/dev/nvme0n2 00:15:33.165 [job2] 00:15:33.165 filename=/dev/nvme0n3 00:15:33.165 [job3] 00:15:33.165 filename=/dev/nvme0n4 00:15:33.165 Could not set queue depth (nvme0n1) 00:15:33.165 Could not set queue depth (nvme0n2) 00:15:33.165 Could not set queue depth (nvme0n3) 00:15:33.165 Could not set queue depth (nvme0n4) 00:15:33.165 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.165 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.165 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.165 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.165 fio-3.35 00:15:33.165 Starting 4 threads 00:15:34.559 00:15:34.559 job0: (groupid=0, jobs=1): err= 0: pid=76199: Fri Nov 29 11:59:39 2024 00:15:34.559 read: IOPS=2026, BW=8107KiB/s (8301kB/s)(8188KiB/1010msec) 00:15:34.559 slat (usec): min=7, max=14910, avg=225.87, stdev=1265.29 00:15:34.559 clat (usec): min=2291, max=47104, avg=30974.14, stdev=4777.31 00:15:34.559 lat (usec): min=12595, max=47134, avg=31200.01, stdev=4755.34 00:15:34.559 clat percentiles (usec): 00:15:34.559 | 1.00th=[12911], 5.00th=[22938], 10.00th=[26608], 20.00th=[28705], 00:15:34.559 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30802], 60.00th=[31851], 00:15:34.559 | 70.00th=[32637], 80.00th=[32900], 90.00th=[36963], 95.00th=[40109], 00:15:34.559 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:15:34.559 | 99.99th=[46924] 00:15:34.559 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:15:34.559 slat (usec): min=13, max=21487, avg=257.43, stdev=1754.86 00:15:34.559 clat (usec): min=12764, max=55626, avg=30870.93, stdev=3994.91 00:15:34.559 lat (usec): min=12821, max=55677, avg=31128.36, stdev=4275.84 00:15:34.559 clat percentiles (usec): 00:15:34.559 | 1.00th=[15270], 5.00th=[24511], 10.00th=[26870], 20.00th=[28705], 00:15:34.559 | 30.00th=[29492], 40.00th=[30278], 50.00th=[31065], 60.00th=[31589], 00:15:34.559 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34866], 95.00th=[36963], 00:15:34.559 | 99.00th=[44303], 99.50th=[45876], 99.90th=[47449], 99.95th=[49021], 00:15:34.559 | 99.99th=[55837] 00:15:34.559 bw ( KiB/s): min= 8192, max= 8208, per=17.47%, avg=8200.00, stdev=11.31, samples=2 00:15:34.559 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:15:34.559 lat (msec) : 4=0.02%, 20=1.61%, 50=98.34%, 100=0.02% 00:15:34.559 cpu : usr=2.58%, sys=5.55%, ctx=124, majf=0, minf=9 00:15:34.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:34.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.559 issued rwts: total=2047,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.559 job1: (groupid=0, jobs=1): err= 0: pid=76202: Fri Nov 29 11:59:39 2024 00:15:34.559 read: IOPS=3508, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1003msec) 00:15:34.559 slat (usec): min=7, max=14784, avg=131.63, stdev=899.96 00:15:34.559 clat (usec): min=291, max=30713, avg=17926.09, stdev=2799.23 00:15:34.559 lat (usec): min=8560, max=36299, avg=18057.72, stdev=2810.20 00:15:34.559 clat 
percentiles (usec): 00:15:34.559 | 1.00th=[ 8979], 5.00th=[11469], 10.00th=[16188], 20.00th=[17171], 00:15:34.559 | 30.00th=[17433], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:15:34.559 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19792], 95.00th=[20055], 00:15:34.559 | 99.00th=[27919], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:15:34.559 | 99.99th=[30802] 00:15:34.559 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:15:34.559 slat (usec): min=4, max=17057, avg=141.65, stdev=923.88 00:15:34.559 clat (usec): min=8484, max=29068, avg=17855.07, stdev=2269.98 00:15:34.559 lat (usec): min=11908, max=29093, avg=17996.72, stdev=2121.79 00:15:34.559 clat percentiles (usec): 00:15:34.559 | 1.00th=[11207], 5.00th=[15139], 10.00th=[15664], 20.00th=[16319], 00:15:34.559 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[18220], 00:15:34.559 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19792], 95.00th=[20055], 00:15:34.559 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:15:34.559 | 99.99th=[28967] 00:15:34.559 bw ( KiB/s): min=13640, max=15032, per=30.54%, avg=14336.00, stdev=984.29, samples=2 00:15:34.559 iops : min= 3410, max= 3758, avg=3584.00, stdev=246.07, samples=2 00:15:34.559 lat (usec) : 500=0.01% 00:15:34.559 lat (msec) : 10=1.51%, 20=92.82%, 50=5.66% 00:15:34.559 cpu : usr=4.09%, sys=9.78%, ctx=147, majf=0, minf=5 00:15:34.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:34.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.559 issued rwts: total=3519,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.559 job2: (groupid=0, jobs=1): err= 0: pid=76205: Fri Nov 29 11:59:39 2024 00:15:34.559 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:15:34.559 slat (usec): min=9, max=30290, avg=247.14, stdev=1836.88 00:15:34.559 clat (usec): min=19581, max=56193, avg=33010.76, stdev=4813.88 00:15:34.560 lat (usec): min=19592, max=58289, avg=33257.90, stdev=4890.96 00:15:34.560 clat percentiles (usec): 00:15:34.560 | 1.00th=[20841], 5.00th=[26608], 10.00th=[29230], 20.00th=[29754], 00:15:34.560 | 30.00th=[30278], 40.00th=[31327], 50.00th=[32637], 60.00th=[33162], 00:15:34.560 | 70.00th=[33817], 80.00th=[36439], 90.00th=[40633], 95.00th=[43779], 00:15:34.560 | 99.00th=[44303], 99.50th=[44303], 99.90th=[49546], 99.95th=[55837], 00:15:34.560 | 99.99th=[56361] 00:15:34.560 write: IOPS=2105, BW=8424KiB/s (8626kB/s)(8508KiB/1010msec); 0 zone resets 00:15:34.560 slat (usec): min=4, max=21270, avg=229.87, stdev=1647.75 00:15:34.560 clat (usec): min=2545, max=44461, avg=28401.31, stdev=5690.64 00:15:34.560 lat (usec): min=11185, max=44489, avg=28631.18, stdev=5501.65 00:15:34.560 clat percentiles (usec): 00:15:34.560 | 1.00th=[11600], 5.00th=[17171], 10.00th=[20055], 20.00th=[23987], 00:15:34.560 | 30.00th=[27919], 40.00th=[28967], 50.00th=[30016], 60.00th=[31065], 00:15:34.560 | 70.00th=[31851], 80.00th=[32375], 90.00th=[32900], 95.00th=[34341], 00:15:34.560 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[43254], 00:15:34.560 | 99.99th=[44303] 00:15:34.560 bw ( KiB/s): min= 8192, max= 8192, per=17.45%, avg=8192.00, stdev= 0.00, samples=2 00:15:34.560 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:34.560 lat (msec) : 4=0.02%, 20=6.08%, 50=93.84%, 100=0.05% 
00:15:34.560 cpu : usr=1.29%, sys=4.56%, ctx=96, majf=0, minf=3 00:15:34.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.560 issued rwts: total=2048,2127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.560 job3: (groupid=0, jobs=1): err= 0: pid=76206: Fri Nov 29 11:59:39 2024 00:15:34.560 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:15:34.560 slat (usec): min=7, max=8582, avg=121.54, stdev=807.92 00:15:34.560 clat (usec): min=8724, max=29431, avg=16639.95, stdev=2264.41 00:15:34.560 lat (usec): min=8746, max=34286, avg=16761.48, stdev=2300.48 00:15:34.560 clat percentiles (usec): 00:15:34.560 | 1.00th=[ 9503], 5.00th=[14484], 10.00th=[14877], 20.00th=[15401], 00:15:34.560 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16450], 60.00th=[16909], 00:15:34.560 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18744], 95.00th=[19268], 00:15:34.560 | 99.00th=[25560], 99.50th=[26870], 99.90th=[29492], 99.95th=[29492], 00:15:34.560 | 99.99th=[29492] 00:15:34.560 write: IOPS=4074, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:15:34.560 slat (usec): min=4, max=18190, avg=130.96, stdev=858.76 00:15:34.560 clat (usec): min=288, max=29462, avg=16539.47, stdev=2727.91 00:15:34.560 lat (usec): min=5494, max=29487, avg=16670.44, stdev=2629.75 00:15:34.560 clat percentiles (usec): 00:15:34.560 | 1.00th=[ 6652], 5.00th=[13435], 10.00th=[14746], 20.00th=[15270], 00:15:34.560 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:15:34.560 | 70.00th=[17433], 80.00th=[17957], 90.00th=[19006], 95.00th=[20055], 00:15:34.560 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:15:34.560 | 99.99th=[29492] 00:15:34.560 bw ( KiB/s): min=15352, max=16384, per=33.80%, avg=15868.00, stdev=729.73, samples=2 00:15:34.560 iops : min= 3838, max= 4096, avg=3967.00, stdev=182.43, samples=2 00:15:34.560 lat (usec) : 500=0.01% 00:15:34.560 lat (msec) : 10=1.95%, 20=94.27%, 50=3.76% 00:15:34.560 cpu : usr=3.49%, sys=11.45%, ctx=162, majf=0, minf=5 00:15:34.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:34.560 issued rwts: total=3584,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:34.560 00:15:34.560 Run status group 0 (all jobs): 00:15:34.560 READ: bw=43.3MiB/s (45.4MB/s), 8107KiB/s-13.9MiB/s (8301kB/s-14.6MB/s), io=43.7MiB (45.9MB), run=1003-1010msec 00:15:34.560 WRITE: bw=45.8MiB/s (48.1MB/s), 8111KiB/s-15.9MiB/s (8306kB/s-16.7MB/s), io=46.3MiB (48.6MB), run=1003-1010msec 00:15:34.560 00:15:34.560 Disk stats (read/write): 00:15:34.560 nvme0n1: ios=1586/1898, merge=0/0, ticks=23308/28183, in_queue=51491, util=88.48% 00:15:34.560 nvme0n2: ios=2987/3072, merge=0/0, ticks=50640/51569, in_queue=102209, util=88.68% 00:15:34.560 nvme0n3: ios=1553/1984, merge=0/0, ticks=49043/55631, in_queue=104674, util=89.15% 00:15:34.560 nvme0n4: ios=3072/3392, merge=0/0, ticks=48558/53728, in_queue=102286, util=89.71% 00:15:34.560 11:59:39 -- target/fio.sh@55 -- # sync 00:15:34.560 11:59:39 -- target/fio.sh@59 -- # fio_pid=76220 00:15:34.560 
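At this point fio.sh moves from the fixed-length write workloads to the hotplug check: a 10-second read job is launched in the background (fio_pid=76220 above), and while it runs the RAID volumes and malloc bdevs behind the namespaces are deleted over RPC, so the reads start failing with "Operation not supported". A rough sketch of that pattern, assuming the same wrapper and bdev names shown in the trace (the exact backgrounding and status bookkeeping in fio.sh is more involved):

    # 10 s read workload against the connected namespaces, run in the background
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # hot-remove the RAID bdevs and every malloc bdev while I/O is still in flight
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$b"
    done

    # fio is expected to exit non-zero once its namespaces disappear
    if ! wait "$fio_pid"; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi

The io_u errors and err=95 results in the read job below are therefore the intended outcome, not a regression.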
11:59:39 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:34.560 11:59:39 -- target/fio.sh@61 -- # sleep 3 00:15:34.560 [global] 00:15:34.560 thread=1 00:15:34.560 invalidate=1 00:15:34.560 rw=read 00:15:34.560 time_based=1 00:15:34.560 runtime=10 00:15:34.560 ioengine=libaio 00:15:34.560 direct=1 00:15:34.560 bs=4096 00:15:34.560 iodepth=1 00:15:34.560 norandommap=1 00:15:34.560 numjobs=1 00:15:34.560 00:15:34.560 [job0] 00:15:34.560 filename=/dev/nvme0n1 00:15:34.560 [job1] 00:15:34.560 filename=/dev/nvme0n2 00:15:34.560 [job2] 00:15:34.560 filename=/dev/nvme0n3 00:15:34.560 [job3] 00:15:34.560 filename=/dev/nvme0n4 00:15:34.560 Could not set queue depth (nvme0n1) 00:15:34.560 Could not set queue depth (nvme0n2) 00:15:34.560 Could not set queue depth (nvme0n3) 00:15:34.560 Could not set queue depth (nvme0n4) 00:15:34.560 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.560 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.560 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.560 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:34.560 fio-3.35 00:15:34.560 Starting 4 threads 00:15:37.842 11:59:42 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:37.842 fio: pid=76263, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:37.842 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28667904, buflen=4096 00:15:37.842 11:59:43 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:37.842 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37826560, buflen=4096 00:15:37.842 fio: pid=76262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:37.842 11:59:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.842 11:59:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:38.101 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41951232, buflen=4096 00:15:38.101 fio: pid=76260, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:38.101 11:59:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:38.101 11:59:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:38.668 fio: pid=76261, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:38.668 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=42024960, buflen=4096 00:15:38.668 00:15:38.668 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76260: Fri Nov 29 11:59:43 2024 00:15:38.668 read: IOPS=2982, BW=11.7MiB/s (12.2MB/s)(40.0MiB/3434msec) 00:15:38.668 slat (usec): min=10, max=11008, avg=23.78, stdev=179.39 00:15:38.668 clat (usec): min=128, max=3989, avg=309.14, stdev=77.64 00:15:38.668 lat (usec): min=139, max=11454, avg=332.92, stdev=196.02 00:15:38.668 clat percentiles (usec): 00:15:38.668 | 1.00th=[ 163], 5.00th=[ 192], 10.00th=[ 239], 20.00th=[ 269], 00:15:38.668 | 30.00th=[ 281], 
40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:15:38.668 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 400], 00:15:38.668 | 99.00th=[ 465], 99.50th=[ 523], 99.90th=[ 979], 99.95th=[ 1254], 00:15:38.668 | 99.99th=[ 1713] 00:15:38.668 bw ( KiB/s): min=10880, max=11784, per=30.34%, avg=11544.00, stdev=347.65, samples=6 00:15:38.668 iops : min= 2720, max= 2946, avg=2886.00, stdev=86.91, samples=6 00:15:38.668 lat (usec) : 250=11.68%, 500=87.72%, 750=0.42%, 1000=0.09% 00:15:38.668 lat (msec) : 2=0.08%, 4=0.01% 00:15:38.668 cpu : usr=1.54%, sys=5.30%, ctx=10267, majf=0, minf=1 00:15:38.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.668 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.668 issued rwts: total=10243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.668 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76261: Fri Nov 29 11:59:43 2024 00:15:38.668 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(40.1MiB/3862msec) 00:15:38.668 slat (usec): min=10, max=15780, avg=25.58, stdev=283.39 00:15:38.668 clat (usec): min=113, max=7807, avg=348.78, stdev=129.30 00:15:38.668 lat (usec): min=159, max=16009, avg=374.36, stdev=309.95 00:15:38.668 clat percentiles (usec): 00:15:38.668 | 1.00th=[ 169], 5.00th=[ 190], 10.00th=[ 206], 20.00th=[ 239], 00:15:38.668 | 30.00th=[ 289], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 388], 00:15:38.668 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 474], 00:15:38.668 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 1090], 99.95th=[ 1647], 00:15:38.668 | 99.99th=[ 3720] 00:15:38.668 bw ( KiB/s): min= 9088, max=13784, per=26.35%, avg=10026.29, stdev=1675.42, samples=7 00:15:38.668 iops : min= 2272, max= 3446, avg=2506.57, stdev=418.86, samples=7 00:15:38.668 lat (usec) : 250=23.07%, 500=74.74%, 750=2.04%, 1000=0.04% 00:15:38.668 lat (msec) : 2=0.07%, 4=0.03%, 10=0.01% 00:15:38.668 cpu : usr=1.09%, sys=4.56%, ctx=10273, majf=0, minf=2 00:15:38.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.668 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.668 issued rwts: total=10261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.669 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76262: Fri Nov 29 11:59:43 2024 00:15:38.669 read: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(36.1MiB/3199msec) 00:15:38.669 slat (usec): min=12, max=9532, avg=22.96, stdev=125.83 00:15:38.669 clat (usec): min=155, max=3379, avg=321.39, stdev=77.89 00:15:38.669 lat (usec): min=169, max=9802, avg=344.35, stdev=147.94 00:15:38.669 clat percentiles (usec): 00:15:38.669 | 1.00th=[ 221], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:15:38.669 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:15:38.669 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 404], 00:15:38.669 | 99.00th=[ 482], 99.50th=[ 529], 99.90th=[ 938], 99.95th=[ 1663], 00:15:38.669 | 99.99th=[ 3392] 00:15:38.669 bw ( KiB/s): min=11440, max=11712, per=30.50%, avg=11605.33, stdev=95.42, samples=6 00:15:38.669 iops : min= 2860, max= 2928, 
avg=2901.33, stdev=23.86, samples=6 00:15:38.669 lat (usec) : 250=2.23%, 500=97.00%, 750=0.56%, 1000=0.10% 00:15:38.669 lat (msec) : 2=0.05%, 4=0.04% 00:15:38.669 cpu : usr=1.25%, sys=5.22%, ctx=9241, majf=0, minf=2 00:15:38.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.669 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.669 issued rwts: total=9236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.669 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76263: Fri Nov 29 11:59:43 2024 00:15:38.669 read: IOPS=2364, BW=9455KiB/s (9682kB/s)(27.3MiB/2961msec) 00:15:38.669 slat (usec): min=10, max=240, avg=24.19, stdev= 8.58 00:15:38.669 clat (usec): min=225, max=3252, avg=396.15, stdev=61.45 00:15:38.669 lat (usec): min=244, max=3294, avg=420.34, stdev=62.04 00:15:38.669 clat percentiles (usec): 00:15:38.669 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 355], 00:15:38.669 | 30.00th=[ 367], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 408], 00:15:38.669 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 457], 95.00th=[ 478], 00:15:38.669 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 734], 00:15:38.669 | 99.99th=[ 3261] 00:15:38.669 bw ( KiB/s): min= 9096, max= 9736, per=25.02%, avg=9518.40, stdev=259.69, samples=5 00:15:38.669 iops : min= 2274, max= 2434, avg=2379.60, stdev=64.92, samples=5 00:15:38.669 lat (usec) : 250=0.21%, 500=97.77%, 750=1.96%, 1000=0.01% 00:15:38.669 lat (msec) : 4=0.03% 00:15:38.669 cpu : usr=1.42%, sys=5.10%, ctx=7001, majf=0, minf=2 00:15:38.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.669 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.669 issued rwts: total=7000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.669 00:15:38.669 Run status group 0 (all jobs): 00:15:38.669 READ: bw=37.2MiB/s (39.0MB/s), 9455KiB/s-11.7MiB/s (9682kB/s-12.2MB/s), io=144MiB (150MB), run=2961-3862msec 00:15:38.669 00:15:38.669 Disk stats (read/write): 00:15:38.669 nvme0n1: ios=9949/0, merge=0/0, ticks=3127/0, in_queue=3127, util=95.48% 00:15:38.669 nvme0n2: ios=9159/0, merge=0/0, ticks=3232/0, in_queue=3232, util=94.97% 00:15:38.669 nvme0n3: ios=8978/0, merge=0/0, ticks=2905/0, in_queue=2905, util=96.21% 00:15:38.669 nvme0n4: ios=6786/0, merge=0/0, ticks=2679/0, in_queue=2679, util=96.83% 00:15:38.669 11:59:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:38.669 11:59:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:38.928 11:59:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:38.928 11:59:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:39.186 11:59:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.186 11:59:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:39.445 11:59:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.445 11:59:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:39.704 11:59:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:39.704 11:59:45 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:40.350 11:59:45 -- target/fio.sh@69 -- # fio_status=0 00:15:40.350 11:59:45 -- target/fio.sh@70 -- # wait 76220 00:15:40.350 11:59:45 -- target/fio.sh@70 -- # fio_status=4 00:15:40.350 11:59:45 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.350 11:59:45 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.350 11:59:45 -- common/autotest_common.sh@1208 -- # local i=0 00:15:40.350 11:59:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.350 11:59:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:40.350 11:59:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:40.350 11:59:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.350 11:59:45 -- common/autotest_common.sh@1220 -- # return 0 00:15:40.350 11:59:45 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:40.350 nvmf hotplug test: fio failed as expected 00:15:40.350 11:59:45 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:40.350 11:59:45 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.609 11:59:45 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:40.609 11:59:45 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:40.609 11:59:45 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:40.609 11:59:45 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:40.609 11:59:45 -- target/fio.sh@91 -- # nvmftestfini 00:15:40.609 11:59:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.609 11:59:45 -- nvmf/common.sh@116 -- # sync 00:15:40.609 11:59:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.609 11:59:45 -- nvmf/common.sh@119 -- # set +e 00:15:40.609 11:59:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.609 11:59:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.609 rmmod nvme_tcp 00:15:40.609 rmmod nvme_fabrics 00:15:40.609 rmmod nvme_keyring 00:15:40.609 11:59:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.609 11:59:45 -- nvmf/common.sh@123 -- # set -e 00:15:40.609 11:59:45 -- nvmf/common.sh@124 -- # return 0 00:15:40.609 11:59:45 -- nvmf/common.sh@477 -- # '[' -n 75831 ']' 00:15:40.609 11:59:45 -- nvmf/common.sh@478 -- # killprocess 75831 00:15:40.609 11:59:45 -- common/autotest_common.sh@936 -- # '[' -z 75831 ']' 00:15:40.609 11:59:45 -- common/autotest_common.sh@940 -- # kill -0 75831 00:15:40.609 11:59:45 -- common/autotest_common.sh@941 -- # uname 00:15:40.609 11:59:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.609 11:59:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75831 00:15:40.609 killing process with pid 75831 00:15:40.609 11:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.609 11:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.609 11:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75831' 
00:15:40.609 11:59:46 -- common/autotest_common.sh@955 -- # kill 75831 00:15:40.609 11:59:46 -- common/autotest_common.sh@960 -- # wait 75831 00:15:40.867 11:59:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.867 11:59:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.867 11:59:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.867 11:59:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.867 11:59:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.867 11:59:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.867 11:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.867 11:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.867 11:59:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:40.867 ************************************ 00:15:40.867 END TEST nvmf_fio_target 00:15:40.867 ************************************ 00:15:40.867 00:15:40.867 real 0m20.550s 00:15:40.867 user 1m18.782s 00:15:40.867 sys 0m9.506s 00:15:40.867 11:59:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:40.867 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.126 11:59:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:41.126 11:59:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.126 11:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.126 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.126 ************************************ 00:15:41.126 START TEST nvmf_bdevio 00:15:41.126 ************************************ 00:15:41.126 11:59:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:41.126 * Looking for test storage... 00:15:41.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.126 11:59:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:41.126 11:59:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:41.126 11:59:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:41.126 11:59:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:41.126 11:59:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:41.126 11:59:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:41.126 11:59:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:41.126 11:59:46 -- scripts/common.sh@335 -- # IFS=.-: 00:15:41.126 11:59:46 -- scripts/common.sh@335 -- # read -ra ver1 00:15:41.126 11:59:46 -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.126 11:59:46 -- scripts/common.sh@336 -- # read -ra ver2 00:15:41.126 11:59:46 -- scripts/common.sh@337 -- # local 'op=<' 00:15:41.126 11:59:46 -- scripts/common.sh@339 -- # ver1_l=2 00:15:41.126 11:59:46 -- scripts/common.sh@340 -- # ver2_l=1 00:15:41.126 11:59:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:41.126 11:59:46 -- scripts/common.sh@343 -- # case "$op" in 00:15:41.126 11:59:46 -- scripts/common.sh@344 -- # : 1 00:15:41.126 11:59:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:41.126 11:59:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.127 11:59:46 -- scripts/common.sh@364 -- # decimal 1 00:15:41.127 11:59:46 -- scripts/common.sh@352 -- # local d=1 00:15:41.127 11:59:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.127 11:59:46 -- scripts/common.sh@354 -- # echo 1 00:15:41.127 11:59:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:41.127 11:59:46 -- scripts/common.sh@365 -- # decimal 2 00:15:41.127 11:59:46 -- scripts/common.sh@352 -- # local d=2 00:15:41.127 11:59:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.127 11:59:46 -- scripts/common.sh@354 -- # echo 2 00:15:41.127 11:59:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:41.127 11:59:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:41.127 11:59:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:41.127 11:59:46 -- scripts/common.sh@367 -- # return 0 00:15:41.127 11:59:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.127 11:59:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:41.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.127 --rc genhtml_branch_coverage=1 00:15:41.127 --rc genhtml_function_coverage=1 00:15:41.127 --rc genhtml_legend=1 00:15:41.127 --rc geninfo_all_blocks=1 00:15:41.127 --rc geninfo_unexecuted_blocks=1 00:15:41.127 00:15:41.127 ' 00:15:41.127 11:59:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:41.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.127 --rc genhtml_branch_coverage=1 00:15:41.127 --rc genhtml_function_coverage=1 00:15:41.127 --rc genhtml_legend=1 00:15:41.127 --rc geninfo_all_blocks=1 00:15:41.127 --rc geninfo_unexecuted_blocks=1 00:15:41.127 00:15:41.127 ' 00:15:41.127 11:59:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:41.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.127 --rc genhtml_branch_coverage=1 00:15:41.127 --rc genhtml_function_coverage=1 00:15:41.127 --rc genhtml_legend=1 00:15:41.127 --rc geninfo_all_blocks=1 00:15:41.127 --rc geninfo_unexecuted_blocks=1 00:15:41.127 00:15:41.127 ' 00:15:41.127 11:59:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:41.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.127 --rc genhtml_branch_coverage=1 00:15:41.127 --rc genhtml_function_coverage=1 00:15:41.127 --rc genhtml_legend=1 00:15:41.127 --rc geninfo_all_blocks=1 00:15:41.127 --rc geninfo_unexecuted_blocks=1 00:15:41.127 00:15:41.127 ' 00:15:41.127 11:59:46 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.127 11:59:46 -- nvmf/common.sh@7 -- # uname -s 00:15:41.127 11:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.127 11:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.127 11:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.127 11:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.127 11:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.127 11:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.127 11:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.127 11:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.127 11:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.127 11:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.127 11:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:15:41.127 
11:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:15:41.127 11:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.127 11:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.127 11:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.127 11:59:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.127 11:59:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.127 11:59:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.127 11:59:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.127 11:59:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.127 11:59:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.127 11:59:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.127 11:59:46 -- paths/export.sh@5 -- # export PATH 00:15:41.127 11:59:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.127 11:59:46 -- nvmf/common.sh@46 -- # : 0 00:15:41.127 11:59:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:41.127 11:59:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:41.127 11:59:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:41.127 11:59:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.127 11:59:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.127 11:59:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:41.127 11:59:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:41.127 11:59:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:41.127 11:59:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.127 11:59:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.127 11:59:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:41.127 11:59:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:41.127 11:59:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.127 11:59:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:41.127 11:59:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:41.127 11:59:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:41.127 11:59:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.127 11:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.127 11:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.386 11:59:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:41.386 11:59:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:41.386 11:59:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:41.386 11:59:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:41.386 11:59:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:41.386 11:59:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:41.386 11:59:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.386 11:59:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.386 11:59:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.386 11:59:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:41.386 11:59:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.386 11:59:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.386 11:59:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.386 11:59:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.386 11:59:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.386 11:59:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.386 11:59:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.386 11:59:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.386 11:59:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:41.386 11:59:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:41.386 Cannot find device "nvmf_tgt_br" 00:15:41.386 11:59:46 -- nvmf/common.sh@154 -- # true 00:15:41.386 11:59:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.386 Cannot find device "nvmf_tgt_br2" 00:15:41.386 11:59:46 -- nvmf/common.sh@155 -- # true 00:15:41.386 11:59:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:41.386 11:59:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:41.386 Cannot find device "nvmf_tgt_br" 00:15:41.386 11:59:46 -- nvmf/common.sh@157 -- # true 00:15:41.386 11:59:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:41.386 Cannot find device "nvmf_tgt_br2" 00:15:41.386 11:59:46 -- nvmf/common.sh@158 -- # true 00:15:41.386 11:59:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:41.386 11:59:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:41.386 11:59:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.386 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:41.386 11:59:46 -- nvmf/common.sh@161 -- # true 00:15:41.386 11:59:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.386 11:59:46 -- nvmf/common.sh@162 -- # true 00:15:41.387 11:59:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.387 11:59:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.387 11:59:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.387 11:59:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.387 11:59:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.387 11:59:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.387 11:59:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.387 11:59:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.387 11:59:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.387 11:59:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:41.387 11:59:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:41.387 11:59:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:41.387 11:59:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:41.387 11:59:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.387 11:59:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.387 11:59:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.387 11:59:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:41.646 11:59:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:41.646 11:59:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.646 11:59:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.646 11:59:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.646 11:59:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.646 11:59:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.646 11:59:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:41.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:15:41.646 00:15:41.646 --- 10.0.0.2 ping statistics --- 00:15:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.646 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:41.646 11:59:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:41.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:41.646 00:15:41.646 --- 10.0.0.3 ping statistics --- 00:15:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.646 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:41.646 11:59:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:41.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:41.646 00:15:41.646 --- 10.0.0.1 ping statistics --- 00:15:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.646 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:41.646 11:59:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.646 11:59:46 -- nvmf/common.sh@421 -- # return 0 00:15:41.646 11:59:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:41.646 11:59:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.646 11:59:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:41.646 11:59:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:41.646 11:59:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.646 11:59:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:41.646 11:59:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:41.646 11:59:46 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:41.646 11:59:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:41.646 11:59:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.646 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 11:59:46 -- nvmf/common.sh@469 -- # nvmfpid=76538 00:15:41.646 11:59:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:41.646 11:59:47 -- nvmf/common.sh@470 -- # waitforlisten 76538 00:15:41.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.646 11:59:47 -- common/autotest_common.sh@829 -- # '[' -z 76538 ']' 00:15:41.646 11:59:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.646 11:59:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.646 11:59:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.646 11:59:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.646 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:15:41.646 [2024-11-29 11:59:47.051413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:41.646 [2024-11-29 11:59:47.051721] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.907 [2024-11-29 11:59:47.190330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.908 [2024-11-29 11:59:47.296559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.908 [2024-11-29 11:59:47.296741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.908 [2024-11-29 11:59:47.296758] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.908 [2024-11-29 11:59:47.296769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:41.908 [2024-11-29 11:59:47.296859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:41.908 [2024-11-29 11:59:47.297749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:41.908 [2024-11-29 11:59:47.297847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:41.908 [2024-11-29 11:59:47.297855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.844 11:59:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.844 11:59:48 -- common/autotest_common.sh@862 -- # return 0 00:15:42.844 11:59:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:42.844 11:59:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 11:59:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.844 11:59:48 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.844 11:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 [2024-11-29 11:59:48.130298] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.844 11:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 11:59:48 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:42.844 11:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 Malloc0 00:15:42.844 11:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 11:59:48 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:42.844 11:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 11:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 11:59:48 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.844 11:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 11:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 11:59:48 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.844 11:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 [2024-11-29 11:59:48.205554] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.844 11:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 11:59:48 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:42.844 11:59:48 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:42.844 11:59:48 -- nvmf/common.sh@520 -- # config=() 00:15:42.844 11:59:48 -- nvmf/common.sh@520 -- # local subsystem config 00:15:42.844 11:59:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:42.844 11:59:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:42.844 { 00:15:42.844 "params": { 00:15:42.844 "name": "Nvme$subsystem", 00:15:42.844 "trtype": "$TEST_TRANSPORT", 00:15:42.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.844 "adrfam": "ipv4", 00:15:42.844 "trsvcid": "$NVMF_PORT", 00:15:42.844 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.844 "hdgst": ${hdgst:-false}, 00:15:42.844 "ddgst": ${ddgst:-false} 00:15:42.844 }, 00:15:42.844 "method": "bdev_nvme_attach_controller" 00:15:42.844 } 00:15:42.844 EOF 00:15:42.844 )") 00:15:42.844 11:59:48 -- nvmf/common.sh@542 -- # cat 00:15:42.844 11:59:48 -- nvmf/common.sh@544 -- # jq . 00:15:42.844 11:59:48 -- nvmf/common.sh@545 -- # IFS=, 00:15:42.844 11:59:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:42.844 "params": { 00:15:42.844 "name": "Nvme1", 00:15:42.845 "trtype": "tcp", 00:15:42.845 "traddr": "10.0.0.2", 00:15:42.845 "adrfam": "ipv4", 00:15:42.845 "trsvcid": "4420", 00:15:42.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.845 "hdgst": false, 00:15:42.845 "ddgst": false 00:15:42.845 }, 00:15:42.845 "method": "bdev_nvme_attach_controller" 00:15:42.845 }' 00:15:42.845 [2024-11-29 11:59:48.268676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:42.845 [2024-11-29 11:59:48.268788] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76574 ] 00:15:43.104 [2024-11-29 11:59:48.418591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.104 [2024-11-29 11:59:48.553772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.104 [2024-11-29 11:59:48.553873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.104 [2024-11-29 11:59:48.553891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.364 [2024-11-29 11:59:48.756836] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:15:43.364 [2024-11-29 11:59:48.757204] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:43.364 I/O targets: 00:15:43.364 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:43.364 00:15:43.364 00:15:43.364 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.364 http://cunit.sourceforge.net/ 00:15:43.364 00:15:43.364 00:15:43.364 Suite: bdevio tests on: Nvme1n1 00:15:43.364 Test: blockdev write read block ...passed 00:15:43.364 Test: blockdev write zeroes read block ...passed 00:15:43.364 Test: blockdev write zeroes read no split ...passed 00:15:43.364 Test: blockdev write zeroes read split ...passed 00:15:43.364 Test: blockdev write zeroes read split partial ...passed 00:15:43.364 Test: blockdev reset ...[2024-11-29 11:59:48.791195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:43.364 [2024-11-29 11:59:48.791788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177d2a0 (9): Bad file descriptor 00:15:43.364 [2024-11-29 11:59:48.806181] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:43.364 passed 00:15:43.364 Test: blockdev write read 8 blocks ...passed 00:15:43.364 Test: blockdev write read size > 128k ...passed 00:15:43.364 Test: blockdev write read invalid size ...passed 00:15:43.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.364 Test: blockdev write read max offset ...passed 00:15:43.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:43.364 Test: blockdev writev readv 8 blocks ...passed 00:15:43.364 Test: blockdev writev readv 30 x 1block ...passed 00:15:43.364 Test: blockdev writev readv block ...passed 00:15:43.364 Test: blockdev writev readv size > 128k ...passed 00:15:43.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:43.364 Test: blockdev comparev and writev ...[2024-11-29 11:59:48.815163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.815228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.815251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.815262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.815978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.816131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.816157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.816168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.816597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.816622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.816640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.817074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.817100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.817118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.364 [2024-11-29 11:59:48.817128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:43.364 passed 00:15:43.364 Test: blockdev nvme passthru rw ...passed 00:15:43.364 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:59:48.818160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.364 [2024-11-29 11:59:48.818189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.818339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.364 [2024-11-29 11:59:48.818360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:43.364 passed 00:15:43.364 Test: blockdev nvme admin passthru ...[2024-11-29 11:59:48.818523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.364 [2024-11-29 11:59:48.818552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:43.364 [2024-11-29 11:59:48.818699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.364 [2024-11-29 11:59:48.818715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:43.364 passed 00:15:43.364 Test: blockdev copy ...passed 00:15:43.364 00:15:43.364 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.364 suites 1 1 n/a 0 0 00:15:43.364 tests 23 23 23 0 0 00:15:43.364 asserts 152 152 152 0 n/a 00:15:43.364 00:15:43.364 Elapsed time = 0.151 seconds 00:15:43.623 11:59:49 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.624 11:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.624 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 11:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.624 11:59:49 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:43.624 11:59:49 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:43.624 11:59:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:43.624 11:59:49 -- nvmf/common.sh@116 -- # sync 00:15:43.882 11:59:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:43.882 11:59:49 -- nvmf/common.sh@119 -- # set +e 00:15:43.882 11:59:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:43.882 11:59:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:43.882 rmmod nvme_tcp 00:15:43.882 rmmod nvme_fabrics 00:15:43.882 rmmod nvme_keyring 00:15:43.882 11:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:43.882 11:59:49 -- nvmf/common.sh@123 -- # set -e 00:15:43.882 11:59:49 -- nvmf/common.sh@124 -- # return 0 00:15:43.882 11:59:49 -- nvmf/common.sh@477 -- # '[' -n 76538 ']' 00:15:43.882 11:59:49 -- nvmf/common.sh@478 -- # killprocess 76538 00:15:43.882 11:59:49 -- common/autotest_common.sh@936 -- # '[' -z 76538 ']' 00:15:43.882 11:59:49 -- common/autotest_common.sh@940 -- # kill -0 76538 00:15:43.882 11:59:49 -- common/autotest_common.sh@941 -- # uname 00:15:43.882 11:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:43.882 11:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76538 00:15:43.882 killing process with pid 76538 00:15:43.882 
11:59:49 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:15:43.882 11:59:49 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:15:43.882 11:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76538' 00:15:43.882 11:59:49 -- common/autotest_common.sh@955 -- # kill 76538 00:15:43.882 11:59:49 -- common/autotest_common.sh@960 -- # wait 76538 00:15:44.141 11:59:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:44.141 11:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:44.141 11:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:44.141 11:59:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.141 11:59:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:44.141 11:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.141 11:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.141 11:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.141 11:59:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:44.141 00:15:44.141 real 0m3.218s 00:15:44.141 user 0m10.521s 00:15:44.141 sys 0m0.893s 00:15:44.141 11:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:44.141 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.141 ************************************ 00:15:44.141 END TEST nvmf_bdevio 00:15:44.141 ************************************ 00:15:44.400 11:59:49 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:15:44.400 11:59:49 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:44.400 11:59:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:44.400 11:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.400 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.400 ************************************ 00:15:44.400 START TEST nvmf_bdevio_no_huge 00:15:44.400 ************************************ 00:15:44.400 11:59:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:44.400 * Looking for test storage... 
00:15:44.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:44.400 11:59:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:44.400 11:59:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:44.400 11:59:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:44.400 11:59:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:44.400 11:59:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:44.400 11:59:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:44.400 11:59:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:44.400 11:59:49 -- scripts/common.sh@335 -- # IFS=.-: 00:15:44.400 11:59:49 -- scripts/common.sh@335 -- # read -ra ver1 00:15:44.400 11:59:49 -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.400 11:59:49 -- scripts/common.sh@336 -- # read -ra ver2 00:15:44.400 11:59:49 -- scripts/common.sh@337 -- # local 'op=<' 00:15:44.400 11:59:49 -- scripts/common.sh@339 -- # ver1_l=2 00:15:44.400 11:59:49 -- scripts/common.sh@340 -- # ver2_l=1 00:15:44.400 11:59:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:44.400 11:59:49 -- scripts/common.sh@343 -- # case "$op" in 00:15:44.400 11:59:49 -- scripts/common.sh@344 -- # : 1 00:15:44.400 11:59:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:44.400 11:59:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:44.400 11:59:49 -- scripts/common.sh@364 -- # decimal 1 00:15:44.401 11:59:49 -- scripts/common.sh@352 -- # local d=1 00:15:44.401 11:59:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.401 11:59:49 -- scripts/common.sh@354 -- # echo 1 00:15:44.401 11:59:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:44.401 11:59:49 -- scripts/common.sh@365 -- # decimal 2 00:15:44.401 11:59:49 -- scripts/common.sh@352 -- # local d=2 00:15:44.401 11:59:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.401 11:59:49 -- scripts/common.sh@354 -- # echo 2 00:15:44.401 11:59:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:44.401 11:59:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:44.401 11:59:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:44.401 11:59:49 -- scripts/common.sh@367 -- # return 0 00:15:44.401 11:59:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.401 11:59:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.401 --rc genhtml_branch_coverage=1 00:15:44.401 --rc genhtml_function_coverage=1 00:15:44.401 --rc genhtml_legend=1 00:15:44.401 --rc geninfo_all_blocks=1 00:15:44.401 --rc geninfo_unexecuted_blocks=1 00:15:44.401 00:15:44.401 ' 00:15:44.401 11:59:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.401 --rc genhtml_branch_coverage=1 00:15:44.401 --rc genhtml_function_coverage=1 00:15:44.401 --rc genhtml_legend=1 00:15:44.401 --rc geninfo_all_blocks=1 00:15:44.401 --rc geninfo_unexecuted_blocks=1 00:15:44.401 00:15:44.401 ' 00:15:44.401 11:59:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.401 --rc genhtml_branch_coverage=1 00:15:44.401 --rc genhtml_function_coverage=1 00:15:44.401 --rc genhtml_legend=1 00:15:44.401 --rc geninfo_all_blocks=1 00:15:44.401 --rc geninfo_unexecuted_blocks=1 00:15:44.401 00:15:44.401 ' 00:15:44.401 
11:59:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.401 --rc genhtml_branch_coverage=1 00:15:44.401 --rc genhtml_function_coverage=1 00:15:44.401 --rc genhtml_legend=1 00:15:44.401 --rc geninfo_all_blocks=1 00:15:44.401 --rc geninfo_unexecuted_blocks=1 00:15:44.401 00:15:44.401 ' 00:15:44.401 11:59:49 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.401 11:59:49 -- nvmf/common.sh@7 -- # uname -s 00:15:44.401 11:59:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.401 11:59:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.401 11:59:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.401 11:59:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.401 11:59:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.401 11:59:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.401 11:59:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.401 11:59:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.401 11:59:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.401 11:59:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:15:44.401 11:59:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:15:44.401 11:59:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.401 11:59:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.401 11:59:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:44.401 11:59:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.401 11:59:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.401 11:59:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.401 11:59:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.401 11:59:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.401 11:59:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.401 11:59:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.401 11:59:49 -- paths/export.sh@5 -- # export PATH 00:15:44.401 11:59:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.401 11:59:49 -- nvmf/common.sh@46 -- # : 0 00:15:44.401 11:59:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:44.401 11:59:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:44.401 11:59:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:44.401 11:59:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.401 11:59:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.401 11:59:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:44.401 11:59:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:44.401 11:59:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:44.401 11:59:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.401 11:59:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.401 11:59:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:44.401 11:59:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:44.401 11:59:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.401 11:59:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:44.401 11:59:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:44.401 11:59:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:44.401 11:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.401 11:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.401 11:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.401 11:59:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:44.401 11:59:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:44.401 11:59:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.401 11:59:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.401 11:59:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.401 11:59:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:44.401 11:59:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.401 11:59:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.401 11:59:49 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.401 11:59:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.401 11:59:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.401 11:59:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.401 11:59:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:44.401 11:59:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.401 11:59:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:44.661 11:59:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:44.661 Cannot find device "nvmf_tgt_br" 00:15:44.661 11:59:49 -- nvmf/common.sh@154 -- # true 00:15:44.661 11:59:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.661 Cannot find device "nvmf_tgt_br2" 00:15:44.661 11:59:49 -- nvmf/common.sh@155 -- # true 00:15:44.661 11:59:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:44.661 11:59:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:44.661 Cannot find device "nvmf_tgt_br" 00:15:44.661 11:59:49 -- nvmf/common.sh@157 -- # true 00:15:44.661 11:59:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:44.661 Cannot find device "nvmf_tgt_br2" 00:15:44.661 11:59:49 -- nvmf/common.sh@158 -- # true 00:15:44.661 11:59:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:44.661 11:59:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:44.661 11:59:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.661 11:59:50 -- nvmf/common.sh@161 -- # true 00:15:44.661 11:59:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.661 11:59:50 -- nvmf/common.sh@162 -- # true 00:15:44.661 11:59:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.661 11:59:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.661 11:59:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.661 11:59:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.661 11:59:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.661 11:59:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.661 11:59:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.661 11:59:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.661 11:59:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.661 11:59:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:44.920 11:59:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:44.920 11:59:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:44.920 11:59:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:44.920 11:59:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.920 11:59:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.920 11:59:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:44.920 11:59:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:44.920 11:59:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:44.920 11:59:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.920 11:59:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.920 11:59:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.920 11:59:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.920 11:59:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.920 11:59:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:44.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:15:44.920 00:15:44.920 --- 10.0.0.2 ping statistics --- 00:15:44.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.920 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:15:44.920 11:59:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:44.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:44.920 00:15:44.920 --- 10.0.0.3 ping statistics --- 00:15:44.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.920 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:44.920 11:59:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:44.920 00:15:44.920 --- 10.0.0.1 ping statistics --- 00:15:44.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.920 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:44.920 11:59:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.920 11:59:50 -- nvmf/common.sh@421 -- # return 0 00:15:44.920 11:59:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.920 11:59:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.920 11:59:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:44.920 11:59:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:44.920 11:59:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.920 11:59:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:44.920 11:59:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:44.920 11:59:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:44.920 11:59:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:44.920 11:59:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.920 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.920 11:59:50 -- nvmf/common.sh@469 -- # nvmfpid=76763 00:15:44.920 11:59:50 -- nvmf/common.sh@470 -- # waitforlisten 76763 00:15:44.920 11:59:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:44.920 11:59:50 -- common/autotest_common.sh@829 -- # '[' -z 76763 ']' 00:15:44.920 11:59:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:44.920 11:59:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.920 11:59:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.920 11:59:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.920 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.920 [2024-11-29 11:59:50.360422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:44.920 [2024-11-29 11:59:50.360805] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:45.178 [2024-11-29 11:59:50.509712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.178 [2024-11-29 11:59:50.620958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.178 [2024-11-29 11:59:50.621794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.178 [2024-11-29 11:59:50.622185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.178 [2024-11-29 11:59:50.622757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.178 [2024-11-29 11:59:50.623256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:45.178 [2024-11-29 11:59:50.623503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:45.178 [2024-11-29 11:59:50.623417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:45.178 [2024-11-29 11:59:50.623520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.113 11:59:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.113 11:59:51 -- common/autotest_common.sh@862 -- # return 0 00:15:46.113 11:59:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.113 11:59:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 11:59:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.113 11:59:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.113 11:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 [2024-11-29 11:59:51.447626] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.113 11:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.113 11:59:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.113 11:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 Malloc0 00:15:46.113 11:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.113 11:59:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.113 11:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 11:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.113 11:59:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:15:46.113 11:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 11:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.113 11:59:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.113 11:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.113 11:59:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.113 [2024-11-29 11:59:51.490916] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.113 11:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.113 11:59:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:46.113 11:59:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:46.113 11:59:51 -- nvmf/common.sh@520 -- # config=() 00:15:46.113 11:59:51 -- nvmf/common.sh@520 -- # local subsystem config 00:15:46.113 11:59:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:46.114 11:59:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:46.114 { 00:15:46.114 "params": { 00:15:46.114 "name": "Nvme$subsystem", 00:15:46.114 "trtype": "$TEST_TRANSPORT", 00:15:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.114 "adrfam": "ipv4", 00:15:46.114 "trsvcid": "$NVMF_PORT", 00:15:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.114 "hdgst": ${hdgst:-false}, 00:15:46.114 "ddgst": ${ddgst:-false} 00:15:46.114 }, 00:15:46.114 "method": "bdev_nvme_attach_controller" 00:15:46.114 } 00:15:46.114 EOF 00:15:46.114 )") 00:15:46.114 11:59:51 -- nvmf/common.sh@542 -- # cat 00:15:46.114 11:59:51 -- nvmf/common.sh@544 -- # jq . 00:15:46.114 11:59:51 -- nvmf/common.sh@545 -- # IFS=, 00:15:46.114 11:59:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:46.114 "params": { 00:15:46.114 "name": "Nvme1", 00:15:46.114 "trtype": "tcp", 00:15:46.114 "traddr": "10.0.0.2", 00:15:46.114 "adrfam": "ipv4", 00:15:46.114 "trsvcid": "4420", 00:15:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.114 "hdgst": false, 00:15:46.114 "ddgst": false 00:15:46.114 }, 00:15:46.114 "method": "bdev_nvme_attach_controller" 00:15:46.114 }' 00:15:46.114 [2024-11-29 11:59:51.549310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:46.114 [2024-11-29 11:59:51.549417] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76799 ] 00:15:46.373 [2024-11-29 11:59:51.694086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.373 [2024-11-29 11:59:51.833293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.373 [2024-11-29 11:59:51.833984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.373 [2024-11-29 11:59:51.834046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.632 [2024-11-29 11:59:52.022791] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:46.632 [2024-11-29 11:59:52.023149] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:46.632 I/O targets: 00:15:46.632 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:46.632 00:15:46.632 00:15:46.632 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.632 http://cunit.sourceforge.net/ 00:15:46.632 00:15:46.632 00:15:46.632 Suite: bdevio tests on: Nvme1n1 00:15:46.632 Test: blockdev write read block ...passed 00:15:46.632 Test: blockdev write zeroes read block ...passed 00:15:46.632 Test: blockdev write zeroes read no split ...passed 00:15:46.632 Test: blockdev write zeroes read split ...passed 00:15:46.632 Test: blockdev write zeroes read split partial ...passed 00:15:46.632 Test: blockdev reset ...[2024-11-29 11:59:52.073749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.632 [2024-11-29 11:59:52.073916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115d760 (9): Bad file descriptor 00:15:46.632 [2024-11-29 11:59:52.091571] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:46.632 passed 00:15:46.632 Test: blockdev write read 8 blocks ...passed 00:15:46.632 Test: blockdev write read size > 128k ...passed 00:15:46.632 Test: blockdev write read invalid size ...passed 00:15:46.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.632 Test: blockdev write read max offset ...passed 00:15:46.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.632 Test: blockdev writev readv 8 blocks ...passed 00:15:46.632 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.632 Test: blockdev writev readv block ...passed 00:15:46.632 Test: blockdev writev readv size > 128k ...passed 00:15:46.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.632 Test: blockdev comparev and writev ...[2024-11-29 11:59:52.101097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.632 [2024-11-29 11:59:52.101179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.632 [2024-11-29 11:59:52.101201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.632 [2024-11-29 11:59:52.101212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.101603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.101621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.101638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.101649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.101995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.102035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.102045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.102452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.102474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.102491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:46.633 [2024-11-29 11:59:52.102501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.633 passed 00:15:46.633 Test: blockdev nvme passthru rw ...passed 00:15:46.633 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.633 Test: blockdev nvme admin passthru ...[2024-11-29 11:59:52.103387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:46.633 [2024-11-29 11:59:52.103422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.103566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:46.633 [2024-11-29 11:59:52.103584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.103708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:46.633 [2024-11-29 11:59:52.103724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.633 [2024-11-29 11:59:52.103861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:46.633 [2024-11-29 11:59:52.103877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.633 passed 00:15:46.633 Test: blockdev copy ...passed 00:15:46.633 00:15:46.633 Run Summary: Type Total Ran Passed Failed Inactive 00:15:46.633 suites 1 1 n/a 0 0 00:15:46.633 tests 23 23 23 0 0 00:15:46.633 asserts 152 152 152 0 n/a 00:15:46.633 00:15:46.633 Elapsed time = 0.208 seconds 00:15:47.201 11:59:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.201 11:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.201 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:47.201 11:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.201 11:59:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:47.201 11:59:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:47.201 11:59:52 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:15:47.201 11:59:52 -- nvmf/common.sh@116 -- # sync 00:15:47.201 11:59:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:47.201 11:59:52 -- nvmf/common.sh@119 -- # set +e 00:15:47.201 11:59:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:47.201 11:59:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:47.201 rmmod nvme_tcp 00:15:47.201 rmmod nvme_fabrics 00:15:47.201 rmmod nvme_keyring 00:15:47.201 11:59:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:47.201 11:59:52 -- nvmf/common.sh@123 -- # set -e 00:15:47.201 11:59:52 -- nvmf/common.sh@124 -- # return 0 00:15:47.201 11:59:52 -- nvmf/common.sh@477 -- # '[' -n 76763 ']' 00:15:47.201 11:59:52 -- nvmf/common.sh@478 -- # killprocess 76763 00:15:47.201 11:59:52 -- common/autotest_common.sh@936 -- # '[' -z 76763 ']' 00:15:47.201 11:59:52 -- common/autotest_common.sh@940 -- # kill -0 76763 00:15:47.201 11:59:52 -- common/autotest_common.sh@941 -- # uname 00:15:47.201 11:59:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.201 11:59:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76763 00:15:47.201 killing process with pid 76763 00:15:47.201 11:59:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:15:47.201 11:59:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:15:47.201 11:59:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76763' 00:15:47.201 11:59:52 -- common/autotest_common.sh@955 -- # kill 76763 00:15:47.201 11:59:52 -- common/autotest_common.sh@960 -- # wait 76763 00:15:47.770 11:59:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.770 11:59:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.770 11:59:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.770 11:59:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.770 11:59:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.770 11:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.770 11:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.770 11:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.770 11:59:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:47.770 ************************************ 00:15:47.770 END TEST nvmf_bdevio_no_huge 00:15:47.770 ************************************ 00:15:47.770 00:15:47.770 real 0m3.522s 00:15:47.770 user 0m11.179s 00:15:47.770 sys 0m1.473s 00:15:47.770 11:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:47.770 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:47.770 11:59:53 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:47.770 11:59:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.770 11:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.770 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:47.770 ************************************ 00:15:47.770 START TEST nvmf_tls 00:15:47.770 ************************************ 00:15:47.770 11:59:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:48.030 * Looking for test storage... 
00:15:48.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.030 11:59:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:48.030 11:59:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:48.030 11:59:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:48.030 11:59:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:48.030 11:59:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:48.030 11:59:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:48.030 11:59:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:48.030 11:59:53 -- scripts/common.sh@335 -- # IFS=.-: 00:15:48.030 11:59:53 -- scripts/common.sh@335 -- # read -ra ver1 00:15:48.030 11:59:53 -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.030 11:59:53 -- scripts/common.sh@336 -- # read -ra ver2 00:15:48.030 11:59:53 -- scripts/common.sh@337 -- # local 'op=<' 00:15:48.030 11:59:53 -- scripts/common.sh@339 -- # ver1_l=2 00:15:48.030 11:59:53 -- scripts/common.sh@340 -- # ver2_l=1 00:15:48.030 11:59:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:48.030 11:59:53 -- scripts/common.sh@343 -- # case "$op" in 00:15:48.030 11:59:53 -- scripts/common.sh@344 -- # : 1 00:15:48.030 11:59:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:48.030 11:59:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.030 11:59:53 -- scripts/common.sh@364 -- # decimal 1 00:15:48.030 11:59:53 -- scripts/common.sh@352 -- # local d=1 00:15:48.030 11:59:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.030 11:59:53 -- scripts/common.sh@354 -- # echo 1 00:15:48.030 11:59:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:48.030 11:59:53 -- scripts/common.sh@365 -- # decimal 2 00:15:48.030 11:59:53 -- scripts/common.sh@352 -- # local d=2 00:15:48.030 11:59:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.030 11:59:53 -- scripts/common.sh@354 -- # echo 2 00:15:48.030 11:59:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:48.030 11:59:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:48.030 11:59:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:48.030 11:59:53 -- scripts/common.sh@367 -- # return 0 00:15:48.030 11:59:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.030 11:59:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.030 --rc genhtml_branch_coverage=1 00:15:48.030 --rc genhtml_function_coverage=1 00:15:48.030 --rc genhtml_legend=1 00:15:48.030 --rc geninfo_all_blocks=1 00:15:48.030 --rc geninfo_unexecuted_blocks=1 00:15:48.030 00:15:48.030 ' 00:15:48.030 11:59:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.030 --rc genhtml_branch_coverage=1 00:15:48.030 --rc genhtml_function_coverage=1 00:15:48.030 --rc genhtml_legend=1 00:15:48.030 --rc geninfo_all_blocks=1 00:15:48.030 --rc geninfo_unexecuted_blocks=1 00:15:48.030 00:15:48.030 ' 00:15:48.030 11:59:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.030 --rc genhtml_branch_coverage=1 00:15:48.030 --rc genhtml_function_coverage=1 00:15:48.030 --rc genhtml_legend=1 00:15:48.030 --rc geninfo_all_blocks=1 00:15:48.030 --rc geninfo_unexecuted_blocks=1 00:15:48.030 00:15:48.030 ' 00:15:48.030 
11:59:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:48.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.030 --rc genhtml_branch_coverage=1 00:15:48.030 --rc genhtml_function_coverage=1 00:15:48.030 --rc genhtml_legend=1 00:15:48.030 --rc geninfo_all_blocks=1 00:15:48.030 --rc geninfo_unexecuted_blocks=1 00:15:48.030 00:15:48.030 ' 00:15:48.030 11:59:53 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.030 11:59:53 -- nvmf/common.sh@7 -- # uname -s 00:15:48.030 11:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.030 11:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.030 11:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.030 11:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.030 11:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.030 11:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.030 11:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.030 11:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.030 11:59:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.030 11:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.030 11:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:15:48.030 11:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:15:48.030 11:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.030 11:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.030 11:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.030 11:59:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.030 11:59:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.030 11:59:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.030 11:59:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.030 11:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.030 11:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.030 11:59:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.030 11:59:53 -- paths/export.sh@5 -- # export PATH 00:15:48.030 11:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.030 11:59:53 -- nvmf/common.sh@46 -- # : 0 00:15:48.030 11:59:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.030 11:59:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.030 11:59:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.030 11:59:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.030 11:59:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.030 11:59:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.030 11:59:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.030 11:59:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.030 11:59:53 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.030 11:59:53 -- target/tls.sh@71 -- # nvmftestinit 00:15:48.030 11:59:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:48.030 11:59:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.030 11:59:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:48.030 11:59:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:48.030 11:59:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:48.030 11:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.031 11:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.031 11:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.031 11:59:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:48.031 11:59:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:48.031 11:59:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:48.031 11:59:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:48.031 11:59:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:48.031 11:59:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:48.031 11:59:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.031 11:59:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.031 11:59:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.031 11:59:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:48.031 11:59:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.031 11:59:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.031 11:59:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.031 
11:59:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.031 11:59:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.031 11:59:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.031 11:59:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.031 11:59:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.031 11:59:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:48.031 11:59:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:48.031 Cannot find device "nvmf_tgt_br" 00:15:48.031 11:59:53 -- nvmf/common.sh@154 -- # true 00:15:48.031 11:59:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.031 Cannot find device "nvmf_tgt_br2" 00:15:48.031 11:59:53 -- nvmf/common.sh@155 -- # true 00:15:48.031 11:59:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:48.031 11:59:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:48.031 Cannot find device "nvmf_tgt_br" 00:15:48.031 11:59:53 -- nvmf/common.sh@157 -- # true 00:15:48.031 11:59:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:48.289 Cannot find device "nvmf_tgt_br2" 00:15:48.289 11:59:53 -- nvmf/common.sh@158 -- # true 00:15:48.289 11:59:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:48.289 11:59:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:48.289 11:59:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.289 11:59:53 -- nvmf/common.sh@161 -- # true 00:15:48.289 11:59:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.289 11:59:53 -- nvmf/common.sh@162 -- # true 00:15:48.289 11:59:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.289 11:59:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.289 11:59:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.289 11:59:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.289 11:59:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.289 11:59:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.289 11:59:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.289 11:59:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.289 11:59:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.289 11:59:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:48.289 11:59:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:48.289 11:59:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:48.289 11:59:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:48.289 11:59:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.289 11:59:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.289 11:59:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.289 11:59:53 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:48.289 11:59:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:48.289 11:59:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.548 11:59:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.548 11:59:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.548 11:59:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.548 11:59:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.548 11:59:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:48.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:48.548 00:15:48.548 --- 10.0.0.2 ping statistics --- 00:15:48.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.548 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:48.548 11:59:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:48.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.152 ms 00:15:48.548 00:15:48.548 --- 10.0.0.3 ping statistics --- 00:15:48.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.548 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:48.548 11:59:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:48.548 00:15:48.548 --- 10.0.0.1 ping statistics --- 00:15:48.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.548 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:48.548 11:59:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.548 11:59:53 -- nvmf/common.sh@421 -- # return 0 00:15:48.548 11:59:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:48.548 11:59:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.548 11:59:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:48.548 11:59:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:48.548 11:59:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.548 11:59:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:48.548 11:59:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:48.548 11:59:53 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:48.548 11:59:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:48.548 11:59:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.548 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
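This second target is launched with --wait-for-rpc because tls.sh has to configure the ssl socket implementation before the framework finishes initializing. The RPC sequence that follows in the trace reduces to roughly the sketch below; the readiness poll via rpc_get_methods stands in for the harness's waitforlisten helper and is an assumption, not a quote from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  # wait until the target's RPC socket answers (assumed readiness check)
  until "$rpc" -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$rpc" -s "$sock" sock_set_default_impl -i ssl               # select the ssl sock impl
  "$rpc" -s "$sock" sock_impl_set_options -i ssl --tls-version 13
  "$rpc" -s "$sock" framework_start_init                       # only now finish startup
  "$rpc" -s "$sock" nvmf_create_transport -t tcp -o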
00:15:48.548 11:59:53 -- nvmf/common.sh@469 -- # nvmfpid=76992 00:15:48.548 11:59:53 -- nvmf/common.sh@470 -- # waitforlisten 76992 00:15:48.548 11:59:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:48.548 11:59:53 -- common/autotest_common.sh@829 -- # '[' -z 76992 ']' 00:15:48.548 11:59:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.548 11:59:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.548 11:59:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.548 11:59:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.548 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.548 [2024-11-29 11:59:53.937684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:48.548 [2024-11-29 11:59:53.938074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.807 [2024-11-29 11:59:54.082488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.807 [2024-11-29 11:59:54.213294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.807 [2024-11-29 11:59:54.213881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.807 [2024-11-29 11:59:54.214027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.807 [2024-11-29 11:59:54.214259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:48.807 [2024-11-29 11:59:54.214402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.742 11:59:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.742 11:59:54 -- common/autotest_common.sh@862 -- # return 0 00:15:49.742 11:59:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:49.742 11:59:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.742 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:49.742 11:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.742 11:59:55 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:15:49.742 11:59:55 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:50.001 true 00:15:50.001 11:59:55 -- target/tls.sh@82 -- # jq -r .tls_version 00:15:50.001 11:59:55 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.259 11:59:55 -- target/tls.sh@82 -- # version=0 00:15:50.259 11:59:55 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:15:50.259 11:59:55 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:50.259 11:59:55 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.259 11:59:55 -- target/tls.sh@90 -- # jq -r .tls_version 00:15:50.518 11:59:55 -- target/tls.sh@90 -- # version=13 00:15:50.518 11:59:55 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:15:50.518 11:59:55 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:50.776 11:59:56 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:50.776 11:59:56 -- target/tls.sh@98 -- # jq -r .tls_version 00:15:51.343 11:59:56 -- target/tls.sh@98 -- # version=7 00:15:51.343 11:59:56 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:15:51.343 11:59:56 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:51.343 11:59:56 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:51.343 11:59:56 -- target/tls.sh@105 -- # ktls=false 00:15:51.343 11:59:56 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:15:51.343 11:59:56 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:51.601 11:59:57 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:51.601 11:59:57 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:51.860 11:59:57 -- target/tls.sh@113 -- # ktls=true 00:15:51.860 11:59:57 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:15:51.860 11:59:57 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:52.118 11:59:57 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:52.118 11:59:57 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:15:52.377 11:59:57 -- target/tls.sh@121 -- # ktls=false 00:15:52.377 11:59:57 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:15:52.377 11:59:57 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:15:52.377 11:59:57 -- target/tls.sh@49 -- # local key hash crc 00:15:52.377 11:59:57 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:15:52.377 11:59:57 -- target/tls.sh@51 -- # hash=01 00:15:52.377 11:59:57 -- 
target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # gzip -1 -c 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # head -c 4 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # tail -c8 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # crc='p$H�' 00:15:52.377 11:59:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:15:52.377 11:59:57 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:15:52.377 11:59:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:52.377 11:59:57 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:52.377 11:59:57 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:15:52.377 11:59:57 -- target/tls.sh@49 -- # local key hash crc 00:15:52.377 11:59:57 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:15:52.377 11:59:57 -- target/tls.sh@51 -- # hash=01 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # gzip -1 -c 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # tail -c8 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # head -c 4 00:15:52.377 11:59:57 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:15:52.377 11:59:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:15:52.377 11:59:57 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:15:52.637 11:59:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:52.637 11:59:57 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:52.637 11:59:57 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:15:52.637 11:59:57 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:15:52.637 11:59:57 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:52.637 11:59:57 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:52.637 11:59:57 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:15:52.637 11:59:57 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:15:52.637 11:59:57 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:52.896 11:59:58 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:53.155 11:59:58 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:15:53.155 11:59:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:15:53.155 11:59:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:53.414 [2024-11-29 11:59:58.782000] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.414 11:59:58 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:53.672 11:59:59 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:53.930 [2024-11-29 11:59:59.246135] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:15:53.930 [2024-11-29 11:59:59.246428] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.930 11:59:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:54.189 malloc0 00:15:54.189 11:59:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:54.447 11:59:59 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:15:54.705 12:00:00 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:06.921 Initializing NVMe Controllers 00:16:06.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:06.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:06.921 Initialization complete. Launching workers. 00:16:06.921 ======================================================== 00:16:06.921 Latency(us) 00:16:06.921 Device Information : IOPS MiB/s Average min max 00:16:06.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10272.18 40.13 6231.79 1410.22 9146.96 00:16:06.921 ======================================================== 00:16:06.921 Total : 10272.18 40.13 6231.79 1410.22 9146.96 00:16:06.921 00:16:06.922 12:00:10 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:06.922 12:00:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.922 12:00:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.922 12:00:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.922 12:00:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:06.922 12:00:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.922 12:00:10 -- target/tls.sh@28 -- # bdevperf_pid=77240 00:16:06.922 12:00:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.922 12:00:10 -- target/tls.sh@31 -- # waitforlisten 77240 /var/tmp/bdevperf.sock 00:16:06.922 12:00:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.922 12:00:10 -- common/autotest_common.sh@829 -- # '[' -z 77240 ']' 00:16:06.922 12:00:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.922 12:00:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.922 12:00:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
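The NVMeTLSkey-1 strings produced above by format_interchange_psk can be re-derived by hand. A sketch of the same computation (hash 01 is the 4-byte CRC32 that gzip appends in its trailer; the configured key plus that CRC are base64-encoded into the interchange format; assumes the CRC bytes contain no NUL or trailing newline, as is the case for these keys):

  key=00112233445566778899aabbccddeeff
  # gzip's 8-byte trailer is CRC32 + ISIZE; keep the first 4 bytes (the CRC32)
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo "NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Both generated keys are written to key1.txt and key2.txt with mode 0600, but only key1.txt is registered on the target (nvmf_subsystem_add_host ... host1 --psk key1.txt); the negative bdevperf runs that follow depend on that asymmetry.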
00:16:06.922 12:00:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.922 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:16:06.922 [2024-11-29 12:00:10.327863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:06.922 [2024-11-29 12:00:10.328245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77240 ] 00:16:06.922 [2024-11-29 12:00:10.470907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.922 [2024-11-29 12:00:10.594275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.922 12:00:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.922 12:00:11 -- common/autotest_common.sh@862 -- # return 0 00:16:06.922 12:00:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:06.922 [2024-11-29 12:00:11.536111] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.922 TLSTESTn1 00:16:06.922 12:00:11 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:06.922 Running I/O for 10 seconds... 00:16:16.892 00:16:16.892 Latency(us) 00:16:16.892 [2024-11-29T12:00:22.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.892 [2024-11-29T12:00:22.403Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:16.892 Verification LBA range: start 0x0 length 0x2000 00:16:16.892 TLSTESTn1 : 10.02 5628.45 21.99 0.00 0.00 22703.22 5183.30 28120.90 00:16:16.892 [2024-11-29T12:00:22.403Z] =================================================================================================================== 00:16:16.892 [2024-11-29T12:00:22.403Z] Total : 5628.45 21.99 0.00 0.00 22703.22 5183.30 28120.90 00:16:16.892 0 00:16:16.892 12:00:21 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.892 12:00:21 -- target/tls.sh@45 -- # killprocess 77240 00:16:16.892 12:00:21 -- common/autotest_common.sh@936 -- # '[' -z 77240 ']' 00:16:16.892 12:00:21 -- common/autotest_common.sh@940 -- # kill -0 77240 00:16:16.892 12:00:21 -- common/autotest_common.sh@941 -- # uname 00:16:16.892 12:00:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.892 12:00:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77240 00:16:16.892 killing process with pid 77240 00:16:16.892 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.892 00:16:16.892 Latency(us) 00:16:16.892 [2024-11-29T12:00:22.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.892 [2024-11-29T12:00:22.403Z] =================================================================================================================== 00:16:16.892 [2024-11-29T12:00:22.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.892 12:00:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:16.892 12:00:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:16.892 12:00:21 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 77240' 00:16:16.892 12:00:21 -- common/autotest_common.sh@955 -- # kill 77240 00:16:16.892 12:00:21 -- common/autotest_common.sh@960 -- # wait 77240 00:16:16.892 12:00:22 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:16.892 12:00:22 -- common/autotest_common.sh@650 -- # local es=0 00:16:16.892 12:00:22 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:16.892 12:00:22 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:16.892 12:00:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.892 12:00:22 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:16.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.892 12:00:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.892 12:00:22 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:16.892 12:00:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:16.892 12:00:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.892 12:00:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:16.892 12:00:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:16.892 12:00:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.892 12:00:22 -- target/tls.sh@28 -- # bdevperf_pid=77373 00:16:16.892 12:00:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.892 12:00:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.892 12:00:22 -- target/tls.sh@31 -- # waitforlisten 77373 /var/tmp/bdevperf.sock 00:16:16.892 12:00:22 -- common/autotest_common.sh@829 -- # '[' -z 77373 ']' 00:16:16.892 12:00:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.892 12:00:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.892 12:00:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.892 12:00:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.892 12:00:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.892 [2024-11-29 12:00:22.097784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:16.892 [2024-11-29 12:00:22.098158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77373 ] 00:16:16.892 [2024-11-29 12:00:22.234189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.892 [2024-11-29 12:00:22.334533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.827 12:00:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.827 12:00:23 -- common/autotest_common.sh@862 -- # return 0 00:16:17.827 12:00:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:18.089 [2024-11-29 12:00:23.359236] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.089 [2024-11-29 12:00:23.366832] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:18.089 [2024-11-29 12:00:23.367070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197bb80 (107): Transport endpoint is not connected 00:16:18.089 [2024-11-29 12:00:23.368045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197bb80 (9): Bad file descriptor 00:16:18.089 [2024-11-29 12:00:23.369042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:18.089 [2024-11-29 12:00:23.369199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:18.089 [2024-11-29 12:00:23.369304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:18.089 request: 00:16:18.089 { 00:16:18.089 "name": "TLSTEST", 00:16:18.089 "trtype": "tcp", 00:16:18.089 "traddr": "10.0.0.2", 00:16:18.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.089 "adrfam": "ipv4", 00:16:18.089 "trsvcid": "4420", 00:16:18.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.089 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:16:18.089 "method": "bdev_nvme_attach_controller", 00:16:18.089 "req_id": 1 00:16:18.089 } 00:16:18.089 Got JSON-RPC error response 00:16:18.089 response: 00:16:18.089 { 00:16:18.089 "code": -32602, 00:16:18.089 "message": "Invalid parameters" 00:16:18.089 } 00:16:18.089 12:00:23 -- target/tls.sh@36 -- # killprocess 77373 00:16:18.089 12:00:23 -- common/autotest_common.sh@936 -- # '[' -z 77373 ']' 00:16:18.089 12:00:23 -- common/autotest_common.sh@940 -- # kill -0 77373 00:16:18.089 12:00:23 -- common/autotest_common.sh@941 -- # uname 00:16:18.089 12:00:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.089 12:00:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77373 00:16:18.089 killing process with pid 77373 00:16:18.089 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.089 00:16:18.089 Latency(us) 00:16:18.089 [2024-11-29T12:00:23.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.089 [2024-11-29T12:00:23.600Z] =================================================================================================================== 00:16:18.089 [2024-11-29T12:00:23.600Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.089 12:00:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:18.089 12:00:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:18.089 12:00:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77373' 00:16:18.089 12:00:23 -- common/autotest_common.sh@955 -- # kill 77373 00:16:18.089 12:00:23 -- common/autotest_common.sh@960 -- # wait 77373 00:16:18.348 12:00:23 -- target/tls.sh@37 -- # return 1 00:16:18.348 12:00:23 -- common/autotest_common.sh@653 -- # es=1 00:16:18.348 12:00:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.348 12:00:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.348 12:00:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.348 12:00:23 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:18.348 12:00:23 -- common/autotest_common.sh@650 -- # local es=0 00:16:18.348 12:00:23 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:18.348 12:00:23 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:18.348 12:00:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.348 12:00:23 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:18.348 12:00:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.348 12:00:23 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:18.348 12:00:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:18.348 12:00:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:18.348 12:00:23 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:16:18.348 12:00:23 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:18.348 12:00:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:18.348 12:00:23 -- target/tls.sh@28 -- # bdevperf_pid=77401 00:16:18.348 12:00:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:18.348 12:00:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:18.348 12:00:23 -- target/tls.sh@31 -- # waitforlisten 77401 /var/tmp/bdevperf.sock 00:16:18.348 12:00:23 -- common/autotest_common.sh@829 -- # '[' -z 77401 ']' 00:16:18.348 12:00:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.348 12:00:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.348 12:00:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:18.348 12:00:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.348 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.348 [2024-11-29 12:00:23.680603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:18.348 [2024-11-29 12:00:23.680934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77401 ] 00:16:18.348 [2024-11-29 12:00:23.813464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.607 [2024-11-29 12:00:23.907394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.542 12:00:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.542 12:00:24 -- common/autotest_common.sh@862 -- # return 0 00:16:19.542 12:00:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.542 [2024-11-29 12:00:24.981373] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:19.542 [2024-11-29 12:00:24.986928] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:19.542 [2024-11-29 12:00:24.987244] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:19.542 [2024-11-29 12:00:24.987464] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:19.542 [2024-11-29 12:00:24.988255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78db80 (107): Transport endpoint is not connected 00:16:19.542 [2024-11-29 12:00:24.989243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78db80 (9): Bad file descriptor 00:16:19.542 [2024-11-29 12:00:24.990244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:19.542 [2024-11-29 12:00:24.990269] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:19.542 [2024-11-29 12:00:24.990280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:19.542 request: 00:16:19.543 { 00:16:19.543 "name": "TLSTEST", 00:16:19.543 "trtype": "tcp", 00:16:19.543 "traddr": "10.0.0.2", 00:16:19.543 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:19.543 "adrfam": "ipv4", 00:16:19.543 "trsvcid": "4420", 00:16:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.543 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:16:19.543 "method": "bdev_nvme_attach_controller", 00:16:19.543 "req_id": 1 00:16:19.543 } 00:16:19.543 Got JSON-RPC error response 00:16:19.543 response: 00:16:19.543 { 00:16:19.543 "code": -32602, 00:16:19.543 "message": "Invalid parameters" 00:16:19.543 } 00:16:19.543 12:00:25 -- target/tls.sh@36 -- # killprocess 77401 00:16:19.543 12:00:25 -- common/autotest_common.sh@936 -- # '[' -z 77401 ']' 00:16:19.543 12:00:25 -- common/autotest_common.sh@940 -- # kill -0 77401 00:16:19.543 12:00:25 -- common/autotest_common.sh@941 -- # uname 00:16:19.543 12:00:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.543 12:00:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77401 00:16:19.801 killing process with pid 77401 00:16:19.801 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.801 00:16:19.801 Latency(us) 00:16:19.801 [2024-11-29T12:00:25.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.801 [2024-11-29T12:00:25.312Z] =================================================================================================================== 00:16:19.801 [2024-11-29T12:00:25.312Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:19.801 12:00:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:19.801 12:00:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:19.801 12:00:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77401' 00:16:19.801 12:00:25 -- common/autotest_common.sh@955 -- # kill 77401 00:16:19.801 12:00:25 -- common/autotest_common.sh@960 -- # wait 77401 00:16:19.801 12:00:25 -- target/tls.sh@37 -- # return 1 00:16:19.801 12:00:25 -- common/autotest_common.sh@653 -- # es=1 00:16:19.801 12:00:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.801 12:00:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.801 12:00:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.801 12:00:25 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.801 12:00:25 -- common/autotest_common.sh@650 -- # local es=0 00:16:19.801 12:00:25 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.801 12:00:25 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:19.801 12:00:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.801 12:00:25 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:19.801 12:00:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.801 12:00:25 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:19.801 12:00:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:19.801 12:00:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:19.801 12:00:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:19.801 12:00:25 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:19.801 12:00:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:19.801 12:00:25 -- target/tls.sh@28 -- # bdevperf_pid=77431 00:16:19.801 12:00:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:19.801 12:00:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.801 12:00:25 -- target/tls.sh@31 -- # waitforlisten 77431 /var/tmp/bdevperf.sock 00:16:19.801 12:00:25 -- common/autotest_common.sh@829 -- # '[' -z 77431 ']' 00:16:19.801 12:00:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:19.801 12:00:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.801 12:00:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:19.801 12:00:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.801 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.801 [2024-11-29 12:00:25.297712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:19.801 [2024-11-29 12:00:25.297991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77431 ] 00:16:20.059 [2024-11-29 12:00:25.432311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.059 [2024-11-29 12:00:25.526624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.995 12:00:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.995 12:00:26 -- common/autotest_common.sh@862 -- # return 0 00:16:20.995 12:00:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:21.253 [2024-11-29 12:00:26.600194] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.253 [2024-11-29 12:00:26.607083] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:21.253 [2024-11-29 12:00:26.607139] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:21.253 [2024-11-29 12:00:26.607240] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:21.253 [2024-11-29 12:00:26.608055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218b80 
(107): Transport endpoint is not connected 00:16:21.254 [2024-11-29 12:00:26.609041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218b80 (9): Bad file descriptor 00:16:21.254 [2024-11-29 12:00:26.610035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:21.254 [2024-11-29 12:00:26.610061] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:21.254 [2024-11-29 12:00:26.610072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:16:21.254 request: 00:16:21.254 { 00:16:21.254 "name": "TLSTEST", 00:16:21.254 "trtype": "tcp", 00:16:21.254 "traddr": "10.0.0.2", 00:16:21.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.254 "adrfam": "ipv4", 00:16:21.254 "trsvcid": "4420", 00:16:21.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:21.254 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:16:21.254 "method": "bdev_nvme_attach_controller", 00:16:21.254 "req_id": 1 00:16:21.254 } 00:16:21.254 Got JSON-RPC error response 00:16:21.254 response: 00:16:21.254 { 00:16:21.254 "code": -32602, 00:16:21.254 "message": "Invalid parameters" 00:16:21.254 } 00:16:21.254 12:00:26 -- target/tls.sh@36 -- # killprocess 77431 00:16:21.254 12:00:26 -- common/autotest_common.sh@936 -- # '[' -z 77431 ']' 00:16:21.254 12:00:26 -- common/autotest_common.sh@940 -- # kill -0 77431 00:16:21.254 12:00:26 -- common/autotest_common.sh@941 -- # uname 00:16:21.254 12:00:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.254 12:00:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77431 00:16:21.254 killing process with pid 77431 00:16:21.254 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.254 00:16:21.254 Latency(us) 00:16:21.254 [2024-11-29T12:00:26.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.254 [2024-11-29T12:00:26.765Z] =================================================================================================================== 00:16:21.254 [2024-11-29T12:00:26.765Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:21.254 12:00:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:21.254 12:00:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:21.254 12:00:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77431' 00:16:21.254 12:00:26 -- common/autotest_common.sh@955 -- # kill 77431 00:16:21.254 12:00:26 -- common/autotest_common.sh@960 -- # wait 77431 00:16:21.512 12:00:26 -- target/tls.sh@37 -- # return 1 00:16:21.512 12:00:26 -- common/autotest_common.sh@653 -- # es=1 00:16:21.512 12:00:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.512 12:00:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.512 12:00:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.512 12:00:26 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:21.512 12:00:26 -- common/autotest_common.sh@650 -- # local es=0 00:16:21.512 12:00:26 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:21.512 12:00:26 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:21.512 12:00:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.512 12:00:26 -- common/autotest_common.sh@642 -- # 
type -t run_bdevperf 00:16:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.512 12:00:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.512 12:00:26 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:21.512 12:00:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.512 12:00:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.512 12:00:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:21.512 12:00:26 -- target/tls.sh@23 -- # psk= 00:16:21.512 12:00:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.512 12:00:26 -- target/tls.sh@28 -- # bdevperf_pid=77456 00:16:21.512 12:00:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.512 12:00:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.512 12:00:26 -- target/tls.sh@31 -- # waitforlisten 77456 /var/tmp/bdevperf.sock 00:16:21.512 12:00:26 -- common/autotest_common.sh@829 -- # '[' -z 77456 ']' 00:16:21.512 12:00:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.512 12:00:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.512 12:00:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.512 12:00:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.512 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.512 [2024-11-29 12:00:26.918427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:21.512 [2024-11-29 12:00:26.918790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77456 ] 00:16:21.772 [2024-11-29 12:00:27.051113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.772 [2024-11-29 12:00:27.138493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.725 12:00:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.725 12:00:27 -- common/autotest_common.sh@862 -- # return 0 00:16:22.725 12:00:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:22.725 [2024-11-29 12:00:28.177264] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:22.725 [2024-11-29 12:00:28.179449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a28450 (9): Bad file descriptor 00:16:22.725 [2024-11-29 12:00:28.180438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:22.725 [2024-11-29 12:00:28.180603] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:22.725 [2024-11-29 12:00:28.180705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:22.725 request: 00:16:22.725 { 00:16:22.725 "name": "TLSTEST", 00:16:22.725 "trtype": "tcp", 00:16:22.725 "traddr": "10.0.0.2", 00:16:22.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.725 "adrfam": "ipv4", 00:16:22.725 "trsvcid": "4420", 00:16:22.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.725 "method": "bdev_nvme_attach_controller", 00:16:22.725 "req_id": 1 00:16:22.725 } 00:16:22.725 Got JSON-RPC error response 00:16:22.725 response: 00:16:22.725 { 00:16:22.725 "code": -32602, 00:16:22.725 "message": "Invalid parameters" 00:16:22.725 } 00:16:22.725 12:00:28 -- target/tls.sh@36 -- # killprocess 77456 00:16:22.725 12:00:28 -- common/autotest_common.sh@936 -- # '[' -z 77456 ']' 00:16:22.725 12:00:28 -- common/autotest_common.sh@940 -- # kill -0 77456 00:16:22.725 12:00:28 -- common/autotest_common.sh@941 -- # uname 00:16:22.725 12:00:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.725 12:00:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77456 00:16:22.983 killing process with pid 77456 00:16:22.983 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.983 00:16:22.983 Latency(us) 00:16:22.983 [2024-11-29T12:00:28.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.983 [2024-11-29T12:00:28.494Z] =================================================================================================================== 00:16:22.983 [2024-11-29T12:00:28.494Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.983 12:00:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:22.983 12:00:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:22.983 12:00:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77456' 00:16:22.983 12:00:28 -- common/autotest_common.sh@955 -- # kill 77456 00:16:22.983 12:00:28 -- common/autotest_common.sh@960 -- # wait 77456 00:16:22.983 12:00:28 -- target/tls.sh@37 -- # return 1 00:16:22.983 12:00:28 -- common/autotest_common.sh@653 -- # es=1 00:16:22.983 12:00:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.983 12:00:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.983 12:00:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.983 12:00:28 -- target/tls.sh@167 -- # killprocess 76992 00:16:22.983 12:00:28 -- common/autotest_common.sh@936 -- # '[' -z 76992 ']' 00:16:22.983 12:00:28 -- common/autotest_common.sh@940 -- # kill -0 76992 00:16:22.983 12:00:28 -- common/autotest_common.sh@941 -- # uname 00:16:22.983 12:00:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.983 12:00:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76992 00:16:22.983 killing process with pid 76992 00:16:22.983 12:00:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:22.983 12:00:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:22.983 12:00:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76992' 00:16:22.983 12:00:28 -- common/autotest_common.sh@955 -- # kill 76992 00:16:22.983 12:00:28 -- common/autotest_common.sh@960 -- # wait 76992 00:16:23.550 12:00:28 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:23.550 12:00:28 -- target/tls.sh@49 -- # local key hash crc 00:16:23.550 12:00:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:23.550 12:00:28 -- target/tls.sh@51 -- # hash=02 
00:16:23.550 12:00:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:23.550 12:00:28 -- target/tls.sh@52 -- # tail -c8 00:16:23.550 12:00:28 -- target/tls.sh@52 -- # gzip -1 -c 00:16:23.550 12:00:28 -- target/tls.sh@52 -- # head -c 4 00:16:23.550 12:00:28 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:23.550 12:00:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:23.550 12:00:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:23.550 12:00:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:23.550 12:00:28 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:23.550 12:00:28 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:23.550 12:00:28 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:23.550 12:00:28 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:23.550 12:00:28 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:23.550 12:00:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.550 12:00:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.550 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.550 12:00:28 -- nvmf/common.sh@469 -- # nvmfpid=77504 00:16:23.550 12:00:28 -- nvmf/common.sh@470 -- # waitforlisten 77504 00:16:23.550 12:00:28 -- common/autotest_common.sh@829 -- # '[' -z 77504 ']' 00:16:23.550 12:00:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.550 12:00:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.550 12:00:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.550 12:00:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.550 12:00:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.550 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.550 [2024-11-29 12:00:28.857234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:23.550 [2024-11-29 12:00:28.857348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.550 [2024-11-29 12:00:28.996853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.809 [2024-11-29 12:00:29.119457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.809 [2024-11-29 12:00:29.119696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.809 [2024-11-29 12:00:29.119718] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.809 [2024-11-29 12:00:29.119729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
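The derivation traced just above (format_interchange_psk) builds the TLS interchange key by appending the CRC32 of the configured key bytes, taken from the gzip trailer, and base64-encoding the result. A minimal sketch of the same pipeline, assuming bash with GNU gzip/coreutils; the test script routes the binary CRC through file descriptors, while this sketch uses plain process substitution:

    # Configured key and hash id (02) as used by the trace above.
    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip's 8-byte trailer is CRC32 (little-endian) followed by the input size;
    # keep only the 4 CRC bytes and append them to the key bytes before base64.
    b64=$(cat <(echo -n "$key") \
              <(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4) | base64)
    echo "NVMeTLSkey-1:02:${b64}:"   # matches the key_long value printed in the trace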
00:16:23.809 [2024-11-29 12:00:29.119764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.774 12:00:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.774 12:00:29 -- common/autotest_common.sh@862 -- # return 0 00:16:24.774 12:00:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.774 12:00:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.774 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:16:24.774 12:00:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.774 12:00:29 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:24.774 12:00:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:24.774 12:00:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:24.775 [2024-11-29 12:00:30.252447] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.048 12:00:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:25.308 12:00:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:25.567 [2024-11-29 12:00:30.836603] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:25.567 [2024-11-29 12:00:30.836905] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.567 12:00:30 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:25.825 malloc0 00:16:25.825 12:00:31 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:26.084 12:00:31 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:26.344 12:00:31 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:26.344 12:00:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:26.344 12:00:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:26.344 12:00:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:26.344 12:00:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:26.344 12:00:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.344 12:00:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:26.344 12:00:31 -- target/tls.sh@28 -- # bdevperf_pid=77559 00:16:26.344 12:00:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.344 12:00:31 -- target/tls.sh@31 -- # waitforlisten 77559 /var/tmp/bdevperf.sock 00:16:26.344 12:00:31 -- common/autotest_common.sh@829 -- # '[' -z 77559 ']' 00:16:26.344 12:00:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.344 12:00:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.344 12:00:31 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.344 12:00:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.344 12:00:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.344 [2024-11-29 12:00:31.648606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:26.344 [2024-11-29 12:00:31.649286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77559 ] 00:16:26.344 [2024-11-29 12:00:31.790380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.603 [2024-11-29 12:00:31.898462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.172 12:00:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.172 12:00:32 -- common/autotest_common.sh@862 -- # return 0 00:16:27.172 12:00:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:27.431 [2024-11-29 12:00:32.861947] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.431 TLSTESTn1 00:16:27.690 12:00:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:27.690 Running I/O for 10 seconds... 00:16:37.664 00:16:37.664 Latency(us) 00:16:37.664 [2024-11-29T12:00:43.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.664 [2024-11-29T12:00:43.175Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:37.664 Verification LBA range: start 0x0 length 0x2000 00:16:37.664 TLSTESTn1 : 10.01 5258.16 20.54 0.00 0.00 24307.76 3485.32 24069.59 00:16:37.664 [2024-11-29T12:00:43.175Z] =================================================================================================================== 00:16:37.664 [2024-11-29T12:00:43.175Z] Total : 5258.16 20.54 0.00 0.00 24307.76 3485.32 24069.59 00:16:37.664 0 00:16:37.664 12:00:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.664 12:00:43 -- target/tls.sh@45 -- # killprocess 77559 00:16:37.664 12:00:43 -- common/autotest_common.sh@936 -- # '[' -z 77559 ']' 00:16:37.664 12:00:43 -- common/autotest_common.sh@940 -- # kill -0 77559 00:16:37.664 12:00:43 -- common/autotest_common.sh@941 -- # uname 00:16:37.664 12:00:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.664 12:00:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77559 00:16:37.923 killing process with pid 77559 00:16:37.923 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.923 00:16:37.923 Latency(us) 00:16:37.923 [2024-11-29T12:00:43.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.923 [2024-11-29T12:00:43.434Z] =================================================================================================================== 00:16:37.923 [2024-11-29T12:00:43.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.923 12:00:43 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:37.923 12:00:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:37.923 12:00:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77559' 00:16:37.923 12:00:43 -- common/autotest_common.sh@955 -- # kill 77559 00:16:37.923 12:00:43 -- common/autotest_common.sh@960 -- # wait 77559 00:16:38.181 12:00:43 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:38.181 12:00:43 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:38.181 12:00:43 -- common/autotest_common.sh@650 -- # local es=0 00:16:38.182 12:00:43 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:38.182 12:00:43 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:38.182 12:00:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.182 12:00:43 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:38.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:38.182 12:00:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.182 12:00:43 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:38.182 12:00:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:38.182 12:00:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:38.182 12:00:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:38.182 12:00:43 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:38.182 12:00:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:38.182 12:00:43 -- target/tls.sh@28 -- # bdevperf_pid=77693 00:16:38.182 12:00:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:38.182 12:00:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.182 12:00:43 -- target/tls.sh@31 -- # waitforlisten 77693 /var/tmp/bdevperf.sock 00:16:38.182 12:00:43 -- common/autotest_common.sh@829 -- # '[' -z 77693 ']' 00:16:38.182 12:00:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.182 12:00:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.182 12:00:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.182 12:00:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.182 12:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.182 [2024-11-29 12:00:43.554804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
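The successful TLSTESTn1 run above strings together three initiator-side steps that are scattered through the trace: start bdevperf in wait-for-RPC mode, attach a TLS controller over its RPC socket, then kick off the I/O job. A condensed sketch of that sequence, using the same paths and NQNs shown in the log and assuming the TLS-enabled target is already listening with the 0600-mode key registered:

    # 1. Start bdevperf idle (-z), listening on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # 2. Attach the NVMe/TCP controller with the interchange PSK; TLS is negotiated here.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

    # 3. Run the verify workload against the attached bdev (reported as TLSTESTn1 above).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests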
00:16:38.182 [2024-11-29 12:00:43.555230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77693 ] 00:16:38.441 [2024-11-29 12:00:43.691548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.441 [2024-11-29 12:00:43.816534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.375 12:00:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.375 12:00:44 -- common/autotest_common.sh@862 -- # return 0 00:16:39.375 12:00:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:39.375 [2024-11-29 12:00:44.847372] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:39.375 [2024-11-29 12:00:44.847875] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:39.375 request: 00:16:39.375 { 00:16:39.375 "name": "TLSTEST", 00:16:39.375 "trtype": "tcp", 00:16:39.375 "traddr": "10.0.0.2", 00:16:39.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.376 "adrfam": "ipv4", 00:16:39.376 "trsvcid": "4420", 00:16:39.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.376 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:39.376 "method": "bdev_nvme_attach_controller", 00:16:39.376 "req_id": 1 00:16:39.376 } 00:16:39.376 Got JSON-RPC error response 00:16:39.376 response: 00:16:39.376 { 00:16:39.376 "code": -22, 00:16:39.376 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:39.376 } 00:16:39.376 12:00:44 -- target/tls.sh@36 -- # killprocess 77693 00:16:39.376 12:00:44 -- common/autotest_common.sh@936 -- # '[' -z 77693 ']' 00:16:39.376 12:00:44 -- common/autotest_common.sh@940 -- # kill -0 77693 00:16:39.376 12:00:44 -- common/autotest_common.sh@941 -- # uname 00:16:39.376 12:00:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.376 12:00:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77693 00:16:39.634 killing process with pid 77693 00:16:39.634 Received shutdown signal, test time was about 10.000000 seconds 00:16:39.634 00:16:39.634 Latency(us) 00:16:39.634 [2024-11-29T12:00:45.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.634 [2024-11-29T12:00:45.145Z] =================================================================================================================== 00:16:39.634 [2024-11-29T12:00:45.145Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.634 12:00:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:39.634 12:00:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:39.634 12:00:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77693' 00:16:39.634 12:00:44 -- common/autotest_common.sh@955 -- # kill 77693 00:16:39.634 12:00:44 -- common/autotest_common.sh@960 -- # wait 77693 00:16:39.892 12:00:45 -- target/tls.sh@37 -- # return 1 00:16:39.892 12:00:45 -- common/autotest_common.sh@653 -- # es=1 00:16:39.892 12:00:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:39.892 12:00:45 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:39.892 12:00:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:39.892 12:00:45 -- target/tls.sh@183 -- # killprocess 77504 00:16:39.892 12:00:45 -- common/autotest_common.sh@936 -- # '[' -z 77504 ']' 00:16:39.892 12:00:45 -- common/autotest_common.sh@940 -- # kill -0 77504 00:16:39.892 12:00:45 -- common/autotest_common.sh@941 -- # uname 00:16:39.892 12:00:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.892 12:00:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77504 00:16:39.892 killing process with pid 77504 00:16:39.892 12:00:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:39.892 12:00:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:39.892 12:00:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77504' 00:16:39.892 12:00:45 -- common/autotest_common.sh@955 -- # kill 77504 00:16:39.892 12:00:45 -- common/autotest_common.sh@960 -- # wait 77504 00:16:40.150 12:00:45 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:40.150 12:00:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:40.150 12:00:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.150 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.150 12:00:45 -- nvmf/common.sh@469 -- # nvmfpid=77731 00:16:40.150 12:00:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:40.150 12:00:45 -- nvmf/common.sh@470 -- # waitforlisten 77731 00:16:40.150 12:00:45 -- common/autotest_common.sh@829 -- # '[' -z 77731 ']' 00:16:40.150 12:00:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.150 12:00:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.150 12:00:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.150 12:00:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.150 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.150 [2024-11-29 12:00:45.632531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:40.150 [2024-11-29 12:00:45.632777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.409 [2024-11-29 12:00:45.768677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.409 [2024-11-29 12:00:45.893156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.409 [2024-11-29 12:00:45.893661] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.409 [2024-11-29 12:00:45.893795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.409 [2024-11-29 12:00:45.893941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.409 [2024-11-29 12:00:45.894007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.347 12:00:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.347 12:00:46 -- common/autotest_common.sh@862 -- # return 0 00:16:41.347 12:00:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.347 12:00:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.347 12:00:46 -- common/autotest_common.sh@10 -- # set +x 00:16:41.347 12:00:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.347 12:00:46 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:41.347 12:00:46 -- common/autotest_common.sh@650 -- # local es=0 00:16:41.347 12:00:46 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:41.347 12:00:46 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:16:41.347 12:00:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.347 12:00:46 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:16:41.347 12:00:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.347 12:00:46 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:41.347 12:00:46 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:41.347 12:00:46 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.606 [2024-11-29 12:00:46.974665] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.606 12:00:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.865 12:00:47 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:42.123 [2024-11-29 12:00:47.502819] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.123 [2024-11-29 12:00:47.503155] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.123 12:00:47 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.382 malloc0 00:16:42.382 12:00:47 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:42.642 12:00:48 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:42.901 [2024-11-29 12:00:48.304850] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:42.901 [2024-11-29 12:00:48.304925] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:42.901 [2024-11-29 12:00:48.304961] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:16:42.901 request: 00:16:42.901 { 00:16:42.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.901 "host": "nqn.2016-06.io.spdk:host1", 00:16:42.901 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:42.901 "method": "nvmf_subsystem_add_host", 00:16:42.901 
"req_id": 1 00:16:42.901 } 00:16:42.901 Got JSON-RPC error response 00:16:42.901 response: 00:16:42.901 { 00:16:42.901 "code": -32603, 00:16:42.901 "message": "Internal error" 00:16:42.901 } 00:16:42.901 12:00:48 -- common/autotest_common.sh@653 -- # es=1 00:16:42.901 12:00:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:42.901 12:00:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:42.902 12:00:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:42.902 12:00:48 -- target/tls.sh@189 -- # killprocess 77731 00:16:42.902 12:00:48 -- common/autotest_common.sh@936 -- # '[' -z 77731 ']' 00:16:42.902 12:00:48 -- common/autotest_common.sh@940 -- # kill -0 77731 00:16:42.902 12:00:48 -- common/autotest_common.sh@941 -- # uname 00:16:42.902 12:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.902 12:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77731 00:16:42.902 killing process with pid 77731 00:16:42.902 12:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.902 12:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:42.902 12:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77731' 00:16:42.902 12:00:48 -- common/autotest_common.sh@955 -- # kill 77731 00:16:42.902 12:00:48 -- common/autotest_common.sh@960 -- # wait 77731 00:16:43.470 12:00:48 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:43.470 12:00:48 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:16:43.470 12:00:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.470 12:00:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.470 12:00:48 -- common/autotest_common.sh@10 -- # set +x 00:16:43.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.470 12:00:48 -- nvmf/common.sh@469 -- # nvmfpid=77798 00:16:43.470 12:00:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:43.470 12:00:48 -- nvmf/common.sh@470 -- # waitforlisten 77798 00:16:43.470 12:00:48 -- common/autotest_common.sh@829 -- # '[' -z 77798 ']' 00:16:43.470 12:00:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.470 12:00:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.470 12:00:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.470 12:00:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.470 12:00:48 -- common/autotest_common.sh@10 -- # set +x 00:16:43.470 [2024-11-29 12:00:48.733076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:43.470 [2024-11-29 12:00:48.733439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.470 [2024-11-29 12:00:48.868603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.729 [2024-11-29 12:00:48.998658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.729 [2024-11-29 12:00:48.998814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:43.729 [2024-11-29 12:00:48.998828] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.729 [2024-11-29 12:00:48.998837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.729 [2024-11-29 12:00:48.998864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.294 12:00:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.294 12:00:49 -- common/autotest_common.sh@862 -- # return 0 00:16:44.294 12:00:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.294 12:00:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.294 12:00:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.552 12:00:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.552 12:00:49 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:44.552 12:00:49 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:44.552 12:00:49 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:44.552 [2024-11-29 12:00:50.052015] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.811 12:00:50 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:45.069 12:00:50 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:45.328 [2024-11-29 12:00:50.588152] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:45.328 [2024-11-29 12:00:50.588877] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.328 12:00:50 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:45.587 malloc0 00:16:45.587 12:00:50 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:45.846 12:00:51 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:46.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.104 12:00:51 -- target/tls.sh@197 -- # bdevperf_pid=77854 00:16:46.104 12:00:51 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.104 12:00:51 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.104 12:00:51 -- target/tls.sh@200 -- # waitforlisten 77854 /var/tmp/bdevperf.sock 00:16:46.104 12:00:51 -- common/autotest_common.sh@829 -- # '[' -z 77854 ']' 00:16:46.104 12:00:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.104 12:00:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.104 12:00:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
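The target-side setup that succeeds here (setup_nvmf_tgt with the 0600 key) reduces to six RPCs. A condensed sketch with the same subsystem, listener address, and key path used in the trace, assuming the nvmf_tgt process is already up on the default /var/tmp/spdk.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

    $RPC nvmf_create_transport -t tcp -o                                     # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                                        # -k: TLS-enabled listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"                               # key file must be 0600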
00:16:46.104 12:00:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.104 12:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 [2024-11-29 12:00:51.421551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:46.104 [2024-11-29 12:00:51.421930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77854 ] 00:16:46.104 [2024-11-29 12:00:51.561052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.363 [2024-11-29 12:00:51.665853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.932 12:00:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.932 12:00:52 -- common/autotest_common.sh@862 -- # return 0 00:16:46.932 12:00:52 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:47.191 [2024-11-29 12:00:52.657356] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.449 TLSTESTn1 00:16:47.449 12:00:52 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:47.708 12:00:53 -- target/tls.sh@205 -- # tgtconf='{ 00:16:47.708 "subsystems": [ 00:16:47.708 { 00:16:47.708 "subsystem": "iobuf", 00:16:47.708 "config": [ 00:16:47.708 { 00:16:47.708 "method": "iobuf_set_options", 00:16:47.708 "params": { 00:16:47.708 "small_pool_count": 8192, 00:16:47.708 "large_pool_count": 1024, 00:16:47.708 "small_bufsize": 8192, 00:16:47.708 "large_bufsize": 135168 00:16:47.708 } 00:16:47.708 } 00:16:47.708 ] 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "subsystem": "sock", 00:16:47.708 "config": [ 00:16:47.708 { 00:16:47.708 "method": "sock_impl_set_options", 00:16:47.708 "params": { 00:16:47.708 "impl_name": "uring", 00:16:47.708 "recv_buf_size": 2097152, 00:16:47.708 "send_buf_size": 2097152, 00:16:47.708 "enable_recv_pipe": true, 00:16:47.708 "enable_quickack": false, 00:16:47.708 "enable_placement_id": 0, 00:16:47.708 "enable_zerocopy_send_server": false, 00:16:47.708 "enable_zerocopy_send_client": false, 00:16:47.708 "zerocopy_threshold": 0, 00:16:47.708 "tls_version": 0, 00:16:47.708 "enable_ktls": false 00:16:47.708 } 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "method": "sock_impl_set_options", 00:16:47.708 "params": { 00:16:47.708 "impl_name": "posix", 00:16:47.708 "recv_buf_size": 2097152, 00:16:47.708 "send_buf_size": 2097152, 00:16:47.708 "enable_recv_pipe": true, 00:16:47.708 "enable_quickack": false, 00:16:47.708 "enable_placement_id": 0, 00:16:47.708 "enable_zerocopy_send_server": true, 00:16:47.708 "enable_zerocopy_send_client": false, 00:16:47.708 "zerocopy_threshold": 0, 00:16:47.708 "tls_version": 0, 00:16:47.708 "enable_ktls": false 00:16:47.708 } 00:16:47.708 }, 00:16:47.708 { 00:16:47.708 "method": "sock_impl_set_options", 00:16:47.708 "params": { 00:16:47.708 "impl_name": "ssl", 00:16:47.709 "recv_buf_size": 4096, 00:16:47.709 "send_buf_size": 4096, 00:16:47.709 "enable_recv_pipe": true, 00:16:47.709 "enable_quickack": false, 00:16:47.709 "enable_placement_id": 0, 00:16:47.709 "enable_zerocopy_send_server": true, 00:16:47.709 "enable_zerocopy_send_client": false, 00:16:47.709 
"zerocopy_threshold": 0, 00:16:47.709 "tls_version": 0, 00:16:47.709 "enable_ktls": false 00:16:47.709 } 00:16:47.709 } 00:16:47.709 ] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "vmd", 00:16:47.709 "config": [] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "accel", 00:16:47.709 "config": [ 00:16:47.709 { 00:16:47.709 "method": "accel_set_options", 00:16:47.709 "params": { 00:16:47.709 "small_cache_size": 128, 00:16:47.709 "large_cache_size": 16, 00:16:47.709 "task_count": 2048, 00:16:47.709 "sequence_count": 2048, 00:16:47.709 "buf_count": 2048 00:16:47.709 } 00:16:47.709 } 00:16:47.709 ] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "bdev", 00:16:47.709 "config": [ 00:16:47.709 { 00:16:47.709 "method": "bdev_set_options", 00:16:47.709 "params": { 00:16:47.709 "bdev_io_pool_size": 65535, 00:16:47.709 "bdev_io_cache_size": 256, 00:16:47.709 "bdev_auto_examine": true, 00:16:47.709 "iobuf_small_cache_size": 128, 00:16:47.709 "iobuf_large_cache_size": 16 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_raid_set_options", 00:16:47.709 "params": { 00:16:47.709 "process_window_size_kb": 1024 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_iscsi_set_options", 00:16:47.709 "params": { 00:16:47.709 "timeout_sec": 30 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_nvme_set_options", 00:16:47.709 "params": { 00:16:47.709 "action_on_timeout": "none", 00:16:47.709 "timeout_us": 0, 00:16:47.709 "timeout_admin_us": 0, 00:16:47.709 "keep_alive_timeout_ms": 10000, 00:16:47.709 "transport_retry_count": 4, 00:16:47.709 "arbitration_burst": 0, 00:16:47.709 "low_priority_weight": 0, 00:16:47.709 "medium_priority_weight": 0, 00:16:47.709 "high_priority_weight": 0, 00:16:47.709 "nvme_adminq_poll_period_us": 10000, 00:16:47.709 "nvme_ioq_poll_period_us": 0, 00:16:47.709 "io_queue_requests": 0, 00:16:47.709 "delay_cmd_submit": true, 00:16:47.709 "bdev_retry_count": 3, 00:16:47.709 "transport_ack_timeout": 0, 00:16:47.709 "ctrlr_loss_timeout_sec": 0, 00:16:47.709 "reconnect_delay_sec": 0, 00:16:47.709 "fast_io_fail_timeout_sec": 0, 00:16:47.709 "generate_uuids": false, 00:16:47.709 "transport_tos": 0, 00:16:47.709 "io_path_stat": false, 00:16:47.709 "allow_accel_sequence": false 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_nvme_set_hotplug", 00:16:47.709 "params": { 00:16:47.709 "period_us": 100000, 00:16:47.709 "enable": false 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_malloc_create", 00:16:47.709 "params": { 00:16:47.709 "name": "malloc0", 00:16:47.709 "num_blocks": 8192, 00:16:47.709 "block_size": 4096, 00:16:47.709 "physical_block_size": 4096, 00:16:47.709 "uuid": "af9d0e70-a0bb-492c-a9de-298eb23115d6", 00:16:47.709 "optimal_io_boundary": 0 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "bdev_wait_for_examine" 00:16:47.709 } 00:16:47.709 ] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "nbd", 00:16:47.709 "config": [] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "scheduler", 00:16:47.709 "config": [ 00:16:47.709 { 00:16:47.709 "method": "framework_set_scheduler", 00:16:47.709 "params": { 00:16:47.709 "name": "static" 00:16:47.709 } 00:16:47.709 } 00:16:47.709 ] 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "subsystem": "nvmf", 00:16:47.709 "config": [ 00:16:47.709 { 00:16:47.709 "method": "nvmf_set_config", 00:16:47.709 "params": { 00:16:47.709 "discovery_filter": "match_any", 00:16:47.709 
"admin_cmd_passthru": { 00:16:47.709 "identify_ctrlr": false 00:16:47.709 } 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_set_max_subsystems", 00:16:47.709 "params": { 00:16:47.709 "max_subsystems": 1024 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_set_crdt", 00:16:47.709 "params": { 00:16:47.709 "crdt1": 0, 00:16:47.709 "crdt2": 0, 00:16:47.709 "crdt3": 0 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_create_transport", 00:16:47.709 "params": { 00:16:47.709 "trtype": "TCP", 00:16:47.709 "max_queue_depth": 128, 00:16:47.709 "max_io_qpairs_per_ctrlr": 127, 00:16:47.709 "in_capsule_data_size": 4096, 00:16:47.709 "max_io_size": 131072, 00:16:47.709 "io_unit_size": 131072, 00:16:47.709 "max_aq_depth": 128, 00:16:47.709 "num_shared_buffers": 511, 00:16:47.709 "buf_cache_size": 4294967295, 00:16:47.709 "dif_insert_or_strip": false, 00:16:47.709 "zcopy": false, 00:16:47.709 "c2h_success": false, 00:16:47.709 "sock_priority": 0, 00:16:47.709 "abort_timeout_sec": 1 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_create_subsystem", 00:16:47.709 "params": { 00:16:47.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.709 "allow_any_host": false, 00:16:47.709 "serial_number": "SPDK00000000000001", 00:16:47.709 "model_number": "SPDK bdev Controller", 00:16:47.709 "max_namespaces": 10, 00:16:47.709 "min_cntlid": 1, 00:16:47.709 "max_cntlid": 65519, 00:16:47.709 "ana_reporting": false 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_subsystem_add_host", 00:16:47.709 "params": { 00:16:47.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.709 "host": "nqn.2016-06.io.spdk:host1", 00:16:47.709 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.709 "method": "nvmf_subsystem_add_ns", 00:16:47.709 "params": { 00:16:47.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.709 "namespace": { 00:16:47.709 "nsid": 1, 00:16:47.709 "bdev_name": "malloc0", 00:16:47.709 "nguid": "AF9D0E70A0BB492CA9DE298EB23115D6", 00:16:47.709 "uuid": "af9d0e70-a0bb-492c-a9de-298eb23115d6" 00:16:47.709 } 00:16:47.709 } 00:16:47.709 }, 00:16:47.709 { 00:16:47.710 "method": "nvmf_subsystem_add_listener", 00:16:47.710 "params": { 00:16:47.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.710 "listen_address": { 00:16:47.710 "trtype": "TCP", 00:16:47.710 "adrfam": "IPv4", 00:16:47.710 "traddr": "10.0.0.2", 00:16:47.710 "trsvcid": "4420" 00:16:47.710 }, 00:16:47.710 "secure_channel": true 00:16:47.710 } 00:16:47.710 } 00:16:47.710 ] 00:16:47.710 } 00:16:47.710 ] 00:16:47.710 }' 00:16:47.710 12:00:53 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:47.968 12:00:53 -- target/tls.sh@206 -- # bdevperfconf='{ 00:16:47.968 "subsystems": [ 00:16:47.968 { 00:16:47.968 "subsystem": "iobuf", 00:16:47.968 "config": [ 00:16:47.968 { 00:16:47.968 "method": "iobuf_set_options", 00:16:47.968 "params": { 00:16:47.968 "small_pool_count": 8192, 00:16:47.968 "large_pool_count": 1024, 00:16:47.968 "small_bufsize": 8192, 00:16:47.968 "large_bufsize": 135168 00:16:47.968 } 00:16:47.968 } 00:16:47.968 ] 00:16:47.968 }, 00:16:47.968 { 00:16:47.968 "subsystem": "sock", 00:16:47.968 "config": [ 00:16:47.968 { 00:16:47.968 "method": "sock_impl_set_options", 00:16:47.968 "params": { 00:16:47.968 "impl_name": "uring", 00:16:47.968 "recv_buf_size": 2097152, 00:16:47.968 "send_buf_size": 2097152, 
00:16:47.968 "enable_recv_pipe": true, 00:16:47.968 "enable_quickack": false, 00:16:47.968 "enable_placement_id": 0, 00:16:47.968 "enable_zerocopy_send_server": false, 00:16:47.968 "enable_zerocopy_send_client": false, 00:16:47.968 "zerocopy_threshold": 0, 00:16:47.968 "tls_version": 0, 00:16:47.968 "enable_ktls": false 00:16:47.968 } 00:16:47.968 }, 00:16:47.968 { 00:16:47.968 "method": "sock_impl_set_options", 00:16:47.968 "params": { 00:16:47.968 "impl_name": "posix", 00:16:47.969 "recv_buf_size": 2097152, 00:16:47.969 "send_buf_size": 2097152, 00:16:47.969 "enable_recv_pipe": true, 00:16:47.969 "enable_quickack": false, 00:16:47.969 "enable_placement_id": 0, 00:16:47.969 "enable_zerocopy_send_server": true, 00:16:47.969 "enable_zerocopy_send_client": false, 00:16:47.969 "zerocopy_threshold": 0, 00:16:47.969 "tls_version": 0, 00:16:47.969 "enable_ktls": false 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "sock_impl_set_options", 00:16:47.969 "params": { 00:16:47.969 "impl_name": "ssl", 00:16:47.969 "recv_buf_size": 4096, 00:16:47.969 "send_buf_size": 4096, 00:16:47.969 "enable_recv_pipe": true, 00:16:47.969 "enable_quickack": false, 00:16:47.969 "enable_placement_id": 0, 00:16:47.969 "enable_zerocopy_send_server": true, 00:16:47.969 "enable_zerocopy_send_client": false, 00:16:47.969 "zerocopy_threshold": 0, 00:16:47.969 "tls_version": 0, 00:16:47.969 "enable_ktls": false 00:16:47.969 } 00:16:47.969 } 00:16:47.969 ] 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "subsystem": "vmd", 00:16:47.969 "config": [] 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "subsystem": "accel", 00:16:47.969 "config": [ 00:16:47.969 { 00:16:47.969 "method": "accel_set_options", 00:16:47.969 "params": { 00:16:47.969 "small_cache_size": 128, 00:16:47.969 "large_cache_size": 16, 00:16:47.969 "task_count": 2048, 00:16:47.969 "sequence_count": 2048, 00:16:47.969 "buf_count": 2048 00:16:47.969 } 00:16:47.969 } 00:16:47.969 ] 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "subsystem": "bdev", 00:16:47.969 "config": [ 00:16:47.969 { 00:16:47.969 "method": "bdev_set_options", 00:16:47.969 "params": { 00:16:47.969 "bdev_io_pool_size": 65535, 00:16:47.969 "bdev_io_cache_size": 256, 00:16:47.969 "bdev_auto_examine": true, 00:16:47.969 "iobuf_small_cache_size": 128, 00:16:47.969 "iobuf_large_cache_size": 16 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_raid_set_options", 00:16:47.969 "params": { 00:16:47.969 "process_window_size_kb": 1024 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_iscsi_set_options", 00:16:47.969 "params": { 00:16:47.969 "timeout_sec": 30 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_nvme_set_options", 00:16:47.969 "params": { 00:16:47.969 "action_on_timeout": "none", 00:16:47.969 "timeout_us": 0, 00:16:47.969 "timeout_admin_us": 0, 00:16:47.969 "keep_alive_timeout_ms": 10000, 00:16:47.969 "transport_retry_count": 4, 00:16:47.969 "arbitration_burst": 0, 00:16:47.969 "low_priority_weight": 0, 00:16:47.969 "medium_priority_weight": 0, 00:16:47.969 "high_priority_weight": 0, 00:16:47.969 "nvme_adminq_poll_period_us": 10000, 00:16:47.969 "nvme_ioq_poll_period_us": 0, 00:16:47.969 "io_queue_requests": 512, 00:16:47.969 "delay_cmd_submit": true, 00:16:47.969 "bdev_retry_count": 3, 00:16:47.969 "transport_ack_timeout": 0, 00:16:47.969 "ctrlr_loss_timeout_sec": 0, 00:16:47.969 "reconnect_delay_sec": 0, 00:16:47.969 "fast_io_fail_timeout_sec": 0, 00:16:47.969 "generate_uuids": false, 00:16:47.969 
"transport_tos": 0, 00:16:47.969 "io_path_stat": false, 00:16:47.969 "allow_accel_sequence": false 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_nvme_attach_controller", 00:16:47.969 "params": { 00:16:47.969 "name": "TLSTEST", 00:16:47.969 "trtype": "TCP", 00:16:47.969 "adrfam": "IPv4", 00:16:47.969 "traddr": "10.0.0.2", 00:16:47.969 "trsvcid": "4420", 00:16:47.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.969 "prchk_reftag": false, 00:16:47.969 "prchk_guard": false, 00:16:47.969 "ctrlr_loss_timeout_sec": 0, 00:16:47.969 "reconnect_delay_sec": 0, 00:16:47.969 "fast_io_fail_timeout_sec": 0, 00:16:47.969 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:47.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.969 "hdgst": false, 00:16:47.969 "ddgst": false 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_nvme_set_hotplug", 00:16:47.969 "params": { 00:16:47.969 "period_us": 100000, 00:16:47.969 "enable": false 00:16:47.969 } 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "method": "bdev_wait_for_examine" 00:16:47.969 } 00:16:47.969 ] 00:16:47.969 }, 00:16:47.969 { 00:16:47.969 "subsystem": "nbd", 00:16:47.969 "config": [] 00:16:47.969 } 00:16:47.969 ] 00:16:47.969 }' 00:16:47.969 12:00:53 -- target/tls.sh@208 -- # killprocess 77854 00:16:47.969 12:00:53 -- common/autotest_common.sh@936 -- # '[' -z 77854 ']' 00:16:47.969 12:00:53 -- common/autotest_common.sh@940 -- # kill -0 77854 00:16:47.969 12:00:53 -- common/autotest_common.sh@941 -- # uname 00:16:47.969 12:00:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.969 12:00:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77854 00:16:47.969 killing process with pid 77854 00:16:47.969 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.969 00:16:47.969 Latency(us) 00:16:47.969 [2024-11-29T12:00:53.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.969 [2024-11-29T12:00:53.480Z] =================================================================================================================== 00:16:47.969 [2024-11-29T12:00:53.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.969 12:00:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:47.969 12:00:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:47.969 12:00:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77854' 00:16:47.969 12:00:53 -- common/autotest_common.sh@955 -- # kill 77854 00:16:47.969 12:00:53 -- common/autotest_common.sh@960 -- # wait 77854 00:16:48.228 12:00:53 -- target/tls.sh@209 -- # killprocess 77798 00:16:48.228 12:00:53 -- common/autotest_common.sh@936 -- # '[' -z 77798 ']' 00:16:48.228 12:00:53 -- common/autotest_common.sh@940 -- # kill -0 77798 00:16:48.228 12:00:53 -- common/autotest_common.sh@941 -- # uname 00:16:48.228 12:00:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.228 12:00:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77798 00:16:48.228 killing process with pid 77798 00:16:48.228 12:00:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:48.228 12:00:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:48.228 12:00:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77798' 00:16:48.228 12:00:53 -- common/autotest_common.sh@955 -- # kill 77798 00:16:48.228 12:00:53 -- common/autotest_common.sh@960 -- # 
wait 77798 00:16:48.796 12:00:54 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:48.796 12:00:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.796 12:00:54 -- target/tls.sh@212 -- # echo '{ 00:16:48.796 "subsystems": [ 00:16:48.796 { 00:16:48.796 "subsystem": "iobuf", 00:16:48.796 "config": [ 00:16:48.796 { 00:16:48.796 "method": "iobuf_set_options", 00:16:48.796 "params": { 00:16:48.796 "small_pool_count": 8192, 00:16:48.796 "large_pool_count": 1024, 00:16:48.796 "small_bufsize": 8192, 00:16:48.796 "large_bufsize": 135168 00:16:48.796 } 00:16:48.796 } 00:16:48.796 ] 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "subsystem": "sock", 00:16:48.796 "config": [ 00:16:48.796 { 00:16:48.796 "method": "sock_impl_set_options", 00:16:48.796 "params": { 00:16:48.796 "impl_name": "uring", 00:16:48.796 "recv_buf_size": 2097152, 00:16:48.796 "send_buf_size": 2097152, 00:16:48.796 "enable_recv_pipe": true, 00:16:48.796 "enable_quickack": false, 00:16:48.796 "enable_placement_id": 0, 00:16:48.796 "enable_zerocopy_send_server": false, 00:16:48.796 "enable_zerocopy_send_client": false, 00:16:48.796 "zerocopy_threshold": 0, 00:16:48.796 "tls_version": 0, 00:16:48.796 "enable_ktls": false 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "sock_impl_set_options", 00:16:48.796 "params": { 00:16:48.796 "impl_name": "posix", 00:16:48.796 "recv_buf_size": 2097152, 00:16:48.796 "send_buf_size": 2097152, 00:16:48.796 "enable_recv_pipe": true, 00:16:48.796 "enable_quickack": false, 00:16:48.796 "enable_placement_id": 0, 00:16:48.796 "enable_zerocopy_send_server": true, 00:16:48.796 "enable_zerocopy_send_client": false, 00:16:48.796 "zerocopy_threshold": 0, 00:16:48.796 "tls_version": 0, 00:16:48.796 "enable_ktls": false 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "sock_impl_set_options", 00:16:48.796 "params": { 00:16:48.796 "impl_name": "ssl", 00:16:48.796 "recv_buf_size": 4096, 00:16:48.796 "send_buf_size": 4096, 00:16:48.796 "enable_recv_pipe": true, 00:16:48.796 "enable_quickack": false, 00:16:48.796 "enable_placement_id": 0, 00:16:48.796 "enable_zerocopy_send_server": true, 00:16:48.796 "enable_zerocopy_send_client": false, 00:16:48.796 "zerocopy_threshold": 0, 00:16:48.796 "tls_version": 0, 00:16:48.796 "enable_ktls": false 00:16:48.796 } 00:16:48.796 } 00:16:48.796 ] 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "subsystem": "vmd", 00:16:48.796 "config": [] 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "subsystem": "accel", 00:16:48.796 "config": [ 00:16:48.796 { 00:16:48.796 "method": "accel_set_options", 00:16:48.796 "params": { 00:16:48.796 "small_cache_size": 128, 00:16:48.796 "large_cache_size": 16, 00:16:48.796 "task_count": 2048, 00:16:48.796 "sequence_count": 2048, 00:16:48.796 "buf_count": 2048 00:16:48.796 } 00:16:48.796 } 00:16:48.796 ] 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "subsystem": "bdev", 00:16:48.796 "config": [ 00:16:48.796 { 00:16:48.796 "method": "bdev_set_options", 00:16:48.796 "params": { 00:16:48.796 "bdev_io_pool_size": 65535, 00:16:48.796 "bdev_io_cache_size": 256, 00:16:48.796 "bdev_auto_examine": true, 00:16:48.796 "iobuf_small_cache_size": 128, 00:16:48.796 "iobuf_large_cache_size": 16 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "bdev_raid_set_options", 00:16:48.796 "params": { 00:16:48.796 "process_window_size_kb": 1024 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "bdev_iscsi_set_options", 00:16:48.796 "params": { 00:16:48.796 "timeout_sec": 30 
00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "bdev_nvme_set_options", 00:16:48.796 "params": { 00:16:48.796 "action_on_timeout": "none", 00:16:48.796 "timeout_us": 0, 00:16:48.796 "timeout_admin_us": 0, 00:16:48.796 "keep_alive_timeout_ms": 10000, 00:16:48.796 "transport_retry_count": 4, 00:16:48.796 "arbitration_burst": 0, 00:16:48.796 "low_priority_weight": 0, 00:16:48.796 "medium_priority_weight": 0, 00:16:48.796 "high_priority_weight": 0, 00:16:48.796 "nvme_adminq_poll_period_us": 10000, 00:16:48.796 "nvme_ioq_poll_period_us": 0, 00:16:48.796 "io_queue_requests": 0, 00:16:48.796 "delay_cmd_submit": true, 00:16:48.796 "bdev_retry_count": 3, 00:16:48.796 "transport_ack_timeout": 0, 00:16:48.796 "ctrlr_loss_timeout_sec": 0, 00:16:48.796 "reconnect_delay_sec": 0, 00:16:48.796 "fast_io_fail_timeout_sec": 0, 00:16:48.796 "generate_uuids": false, 00:16:48.796 "transport_tos": 0, 00:16:48.796 "io_path_stat": false, 00:16:48.796 "allow_accel_sequence": false 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "bdev_nvme_set_hotplug", 00:16:48.796 "params": { 00:16:48.796 "period_us": 100000, 00:16:48.796 "enable": false 00:16:48.796 } 00:16:48.796 }, 00:16:48.796 { 00:16:48.796 "method": "bdev_malloc_create", 00:16:48.797 "params": { 00:16:48.797 "name": "malloc0", 00:16:48.797 "num_blocks": 8192, 00:16:48.797 "block_size": 4096, 00:16:48.797 "physical_block_size": 4096, 00:16:48.797 "uuid": "af9d0e70-a0bb-492c-a9de-298eb23115d6", 00:16:48.797 "optimal_io_boundary": 0 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "bdev_wait_for_examine" 00:16:48.797 } 00:16:48.797 ] 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "subsystem": "nbd", 00:16:48.797 "config": [] 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "subsystem": "scheduler", 00:16:48.797 "config": [ 00:16:48.797 { 00:16:48.797 "method": "framework_set_scheduler", 00:16:48.797 "params": { 00:16:48.797 "name": "static" 00:16:48.797 } 00:16:48.797 } 00:16:48.797 ] 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "subsystem": "nvmf", 00:16:48.797 "config": [ 00:16:48.797 { 00:16:48.797 "method": "nvmf_set_config", 00:16:48.797 "params": { 00:16:48.797 "discovery_filter": "match_any", 00:16:48.797 "admin_cmd_passthru": { 00:16:48.797 "identify_ctrlr": false 00:16:48.797 } 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_set_max_subsystems", 00:16:48.797 "params": { 00:16:48.797 "max_subsystems": 1024 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_set_crdt", 00:16:48.797 "params": { 00:16:48.797 "crdt1": 0, 00:16:48.797 "crdt2": 0, 00:16:48.797 "crdt3": 0 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_create_transport", 00:16:48.797 "params": { 00:16:48.797 "trtype": "TCP", 00:16:48.797 "max_queue_depth": 128, 00:16:48.797 "max_io_qpairs_per_ctrlr": 127, 00:16:48.797 "in_capsule_data_size": 4096, 00:16:48.797 "max_io_size": 131072, 00:16:48.797 "io_unit_size": 131072, 00:16:48.797 "max_aq_depth": 128, 00:16:48.797 "num_shared_buffers": 511, 00:16:48.797 "buf_cache_size": 4294967295, 00:16:48.797 "dif_insert_or_strip": false, 00:16:48.797 "zcopy": false, 00:16:48.797 "c2h_success": false, 00:16:48.797 "sock_priority": 0, 00:16:48.797 "abort_timeout_sec": 1 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_create_subsystem", 00:16:48.797 "params": { 00:16:48.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.797 "allow_any_host": false, 00:16:48.797 "serial_number": "SPDK00000000000001", 
00:16:48.797 "model_number": "SPDK bdev Controller", 00:16:48.797 "max_namespaces": 10, 00:16:48.797 "min_cntlid": 1, 00:16:48.797 "max_cntlid": 65519, 00:16:48.797 "ana_reporting": false 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_subsystem_add_host", 00:16:48.797 "params": { 00:16:48.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.797 "host": "nqn.2016-06.io.spdk:host1", 00:16:48.797 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_subsystem_add_ns", 00:16:48.797 "params": { 00:16:48.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.797 "namespace": { 00:16:48.797 "nsid": 1, 00:16:48.797 "bdev_name": "malloc0", 00:16:48.797 "nguid": "AF9D0E70A0BB492CA9DE298EB23115D6", 00:16:48.797 "uuid": "af9d0e70-a0bb-492c-a9de-298eb23115d6" 00:16:48.797 } 00:16:48.797 } 00:16:48.797 }, 00:16:48.797 { 00:16:48.797 "method": "nvmf_subsystem_add_listener", 00:16:48.797 "params": { 00:16:48.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.797 "listen_address": { 00:16:48.797 "trtype": "TCP", 00:16:48.797 "adrfam": "IPv4", 00:16:48.797 "traddr": "10.0.0.2", 00:16:48.797 "trsvcid": "4420" 00:16:48.797 }, 00:16:48.797 "secure_channel": true 00:16:48.797 } 00:16:48.797 } 00:16:48.797 ] 00:16:48.797 } 00:16:48.797 ] 00:16:48.797 }' 00:16:48.797 12:00:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.797 12:00:54 -- common/autotest_common.sh@10 -- # set +x 00:16:48.797 12:00:54 -- nvmf/common.sh@469 -- # nvmfpid=77903 00:16:48.797 12:00:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:48.797 12:00:54 -- nvmf/common.sh@470 -- # waitforlisten 77903 00:16:48.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.797 12:00:54 -- common/autotest_common.sh@829 -- # '[' -z 77903 ']' 00:16:48.797 12:00:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.797 12:00:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.797 12:00:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.797 12:00:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.797 12:00:54 -- common/autotest_common.sh@10 -- # set +x 00:16:48.797 [2024-11-29 12:00:54.085618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:48.797 [2024-11-29 12:00:54.086174] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.797 [2024-11-29 12:00:54.217402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.056 [2024-11-29 12:00:54.343594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.056 [2024-11-29 12:00:54.343909] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.056 [2024-11-29 12:00:54.343931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.056 [2024-11-29 12:00:54.343941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:49.056 [2024-11-29 12:00:54.343982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.316 [2024-11-29 12:00:54.602271] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.316 [2024-11-29 12:00:54.634271] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.316 [2024-11-29 12:00:54.634590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.574 12:00:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.574 12:00:55 -- common/autotest_common.sh@862 -- # return 0 00:16:49.574 12:00:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.574 12:00:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.574 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.832 12:00:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.832 12:00:55 -- target/tls.sh@216 -- # bdevperf_pid=77935 00:16:49.832 12:00:55 -- target/tls.sh@217 -- # waitforlisten 77935 /var/tmp/bdevperf.sock 00:16:49.832 12:00:55 -- common/autotest_common.sh@829 -- # '[' -z 77935 ']' 00:16:49.832 12:00:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.832 12:00:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.832 12:00:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.832 12:00:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.832 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 12:00:55 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:49.832 12:00:55 -- target/tls.sh@213 -- # echo '{ 00:16:49.832 "subsystems": [ 00:16:49.832 { 00:16:49.832 "subsystem": "iobuf", 00:16:49.832 "config": [ 00:16:49.832 { 00:16:49.832 "method": "iobuf_set_options", 00:16:49.832 "params": { 00:16:49.832 "small_pool_count": 8192, 00:16:49.832 "large_pool_count": 1024, 00:16:49.832 "small_bufsize": 8192, 00:16:49.832 "large_bufsize": 135168 00:16:49.832 } 00:16:49.832 } 00:16:49.832 ] 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "subsystem": "sock", 00:16:49.832 "config": [ 00:16:49.832 { 00:16:49.832 "method": "sock_impl_set_options", 00:16:49.832 "params": { 00:16:49.832 "impl_name": "uring", 00:16:49.832 "recv_buf_size": 2097152, 00:16:49.832 "send_buf_size": 2097152, 00:16:49.832 "enable_recv_pipe": true, 00:16:49.832 "enable_quickack": false, 00:16:49.832 "enable_placement_id": 0, 00:16:49.832 "enable_zerocopy_send_server": false, 00:16:49.832 "enable_zerocopy_send_client": false, 00:16:49.832 "zerocopy_threshold": 0, 00:16:49.832 "tls_version": 0, 00:16:49.832 "enable_ktls": false 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "sock_impl_set_options", 00:16:49.832 "params": { 00:16:49.832 "impl_name": "posix", 00:16:49.832 "recv_buf_size": 2097152, 00:16:49.832 "send_buf_size": 2097152, 00:16:49.832 "enable_recv_pipe": true, 00:16:49.832 "enable_quickack": false, 00:16:49.832 "enable_placement_id": 0, 00:16:49.832 "enable_zerocopy_send_server": true, 00:16:49.832 "enable_zerocopy_send_client": false, 00:16:49.832 "zerocopy_threshold": 0, 00:16:49.832 "tls_version": 0, 00:16:49.832 
"enable_ktls": false 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "sock_impl_set_options", 00:16:49.832 "params": { 00:16:49.832 "impl_name": "ssl", 00:16:49.832 "recv_buf_size": 4096, 00:16:49.832 "send_buf_size": 4096, 00:16:49.832 "enable_recv_pipe": true, 00:16:49.832 "enable_quickack": false, 00:16:49.832 "enable_placement_id": 0, 00:16:49.832 "enable_zerocopy_send_server": true, 00:16:49.832 "enable_zerocopy_send_client": false, 00:16:49.832 "zerocopy_threshold": 0, 00:16:49.832 "tls_version": 0, 00:16:49.832 "enable_ktls": false 00:16:49.832 } 00:16:49.832 } 00:16:49.832 ] 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "subsystem": "vmd", 00:16:49.832 "config": [] 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "subsystem": "accel", 00:16:49.832 "config": [ 00:16:49.832 { 00:16:49.832 "method": "accel_set_options", 00:16:49.832 "params": { 00:16:49.832 "small_cache_size": 128, 00:16:49.832 "large_cache_size": 16, 00:16:49.832 "task_count": 2048, 00:16:49.832 "sequence_count": 2048, 00:16:49.832 "buf_count": 2048 00:16:49.832 } 00:16:49.832 } 00:16:49.832 ] 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "subsystem": "bdev", 00:16:49.832 "config": [ 00:16:49.832 { 00:16:49.832 "method": "bdev_set_options", 00:16:49.832 "params": { 00:16:49.832 "bdev_io_pool_size": 65535, 00:16:49.832 "bdev_io_cache_size": 256, 00:16:49.832 "bdev_auto_examine": true, 00:16:49.832 "iobuf_small_cache_size": 128, 00:16:49.832 "iobuf_large_cache_size": 16 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "bdev_raid_set_options", 00:16:49.832 "params": { 00:16:49.832 "process_window_size_kb": 1024 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "bdev_iscsi_set_options", 00:16:49.832 "params": { 00:16:49.832 "timeout_sec": 30 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "bdev_nvme_set_options", 00:16:49.832 "params": { 00:16:49.832 "action_on_timeout": "none", 00:16:49.832 "timeout_us": 0, 00:16:49.832 "timeout_admin_us": 0, 00:16:49.832 "keep_alive_timeout_ms": 10000, 00:16:49.832 "transport_retry_count": 4, 00:16:49.832 "arbitration_burst": 0, 00:16:49.832 "low_priority_weight": 0, 00:16:49.832 "medium_priority_weight": 0, 00:16:49.832 "high_priority_weight": 0, 00:16:49.832 "nvme_adminq_poll_period_us": 10000, 00:16:49.832 "nvme_ioq_poll_period_us": 0, 00:16:49.832 "io_queue_requests": 512, 00:16:49.832 "delay_cmd_submit": true, 00:16:49.832 "bdev_retry_count": 3, 00:16:49.832 "transport_ack_timeout": 0, 00:16:49.832 "ctrlr_loss_timeout_sec": 0, 00:16:49.832 "reconnect_delay_sec": 0, 00:16:49.832 "fast_io_fail_timeout_sec": 0, 00:16:49.832 "generate_uuids": false, 00:16:49.832 "transport_tos": 0, 00:16:49.832 "io_path_stat": false, 00:16:49.832 "allow_accel_sequence": false 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.832 "method": "bdev_nvme_attach_controller", 00:16:49.832 "params": { 00:16:49.832 "name": "TLSTEST", 00:16:49.832 "trtype": "TCP", 00:16:49.832 "adrfam": "IPv4", 00:16:49.832 "traddr": "10.0.0.2", 00:16:49.832 "trsvcid": "4420", 00:16:49.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.832 "prchk_reftag": false, 00:16:49.832 "prchk_guard": false, 00:16:49.832 "ctrlr_loss_timeout_sec": 0, 00:16:49.832 "reconnect_delay_sec": 0, 00:16:49.832 "fast_io_fail_timeout_sec": 0, 00:16:49.832 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:16:49.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:49.832 "hdgst": false, 00:16:49.832 "ddgst": false 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 
{ 00:16:49.832 "method": "bdev_nvme_set_hotplug", 00:16:49.832 "params": { 00:16:49.832 "period_us": 100000, 00:16:49.832 "enable": false 00:16:49.832 } 00:16:49.832 }, 00:16:49.832 { 00:16:49.833 "method": "bdev_wait_for_examine" 00:16:49.833 } 00:16:49.833 ] 00:16:49.833 }, 00:16:49.833 { 00:16:49.833 "subsystem": "nbd", 00:16:49.833 "config": [] 00:16:49.833 } 00:16:49.833 ] 00:16:49.833 }' 00:16:49.833 [2024-11-29 12:00:55.157547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:49.833 [2024-11-29 12:00:55.158020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77935 ] 00:16:49.833 [2024-11-29 12:00:55.302290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.091 [2024-11-29 12:00:55.404151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.091 [2024-11-29 12:00:55.570461] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.657 12:00:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.657 12:00:56 -- common/autotest_common.sh@862 -- # return 0 00:16:50.657 12:00:56 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:50.916 Running I/O for 10 seconds... 00:17:00.947 00:17:00.947 Latency(us) 00:17:00.947 [2024-11-29T12:01:06.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.947 [2024-11-29T12:01:06.458Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:00.947 Verification LBA range: start 0x0 length 0x2000 00:17:00.947 TLSTESTn1 : 10.02 5824.33 22.75 0.00 0.00 21938.74 5689.72 23473.80 00:17:00.947 [2024-11-29T12:01:06.458Z] =================================================================================================================== 00:17:00.947 [2024-11-29T12:01:06.458Z] Total : 5824.33 22.75 0.00 0.00 21938.74 5689.72 23473.80 00:17:00.947 0 00:17:00.947 12:01:06 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.947 12:01:06 -- target/tls.sh@223 -- # killprocess 77935 00:17:00.947 12:01:06 -- common/autotest_common.sh@936 -- # '[' -z 77935 ']' 00:17:00.947 12:01:06 -- common/autotest_common.sh@940 -- # kill -0 77935 00:17:00.947 12:01:06 -- common/autotest_common.sh@941 -- # uname 00:17:00.947 12:01:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.947 12:01:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77935 00:17:00.947 killing process with pid 77935 00:17:00.947 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.947 00:17:00.947 Latency(us) 00:17:00.947 [2024-11-29T12:01:06.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.947 [2024-11-29T12:01:06.458Z] =================================================================================================================== 00:17:00.947 [2024-11-29T12:01:06.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.947 12:01:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:00.947 12:01:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:00.947 12:01:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77935' 00:17:00.947 12:01:06 -- 
common/autotest_common.sh@955 -- # kill 77935 00:17:00.947 12:01:06 -- common/autotest_common.sh@960 -- # wait 77935 00:17:01.206 12:01:06 -- target/tls.sh@224 -- # killprocess 77903 00:17:01.206 12:01:06 -- common/autotest_common.sh@936 -- # '[' -z 77903 ']' 00:17:01.206 12:01:06 -- common/autotest_common.sh@940 -- # kill -0 77903 00:17:01.206 12:01:06 -- common/autotest_common.sh@941 -- # uname 00:17:01.206 12:01:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.206 12:01:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77903 00:17:01.206 killing process with pid 77903 00:17:01.206 12:01:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:01.206 12:01:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:01.206 12:01:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77903' 00:17:01.206 12:01:06 -- common/autotest_common.sh@955 -- # kill 77903 00:17:01.206 12:01:06 -- common/autotest_common.sh@960 -- # wait 77903 00:17:01.772 12:01:06 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:01.772 12:01:06 -- target/tls.sh@227 -- # cleanup 00:17:01.772 12:01:06 -- target/tls.sh@15 -- # process_shm --id 0 00:17:01.772 12:01:06 -- common/autotest_common.sh@806 -- # type=--id 00:17:01.772 12:01:06 -- common/autotest_common.sh@807 -- # id=0 00:17:01.772 12:01:06 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:01.772 12:01:06 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:01.772 12:01:06 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:01.772 12:01:06 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:01.772 12:01:06 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:01.772 12:01:06 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:01.772 nvmf_trace.0 00:17:01.772 12:01:07 -- common/autotest_common.sh@821 -- # return 0 00:17:01.772 12:01:07 -- target/tls.sh@16 -- # killprocess 77935 00:17:01.772 12:01:07 -- common/autotest_common.sh@936 -- # '[' -z 77935 ']' 00:17:01.772 12:01:07 -- common/autotest_common.sh@940 -- # kill -0 77935 00:17:01.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77935) - No such process 00:17:01.772 12:01:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77935 is not found' 00:17:01.772 Process with pid 77935 is not found 00:17:01.772 12:01:07 -- target/tls.sh@17 -- # nvmftestfini 00:17:01.772 12:01:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:01.772 12:01:07 -- nvmf/common.sh@116 -- # sync 00:17:01.772 12:01:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:01.772 12:01:07 -- nvmf/common.sh@119 -- # set +e 00:17:01.772 12:01:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:01.772 12:01:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:01.772 rmmod nvme_tcp 00:17:01.772 rmmod nvme_fabrics 00:17:01.772 rmmod nvme_keyring 00:17:01.772 12:01:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:01.772 12:01:07 -- nvmf/common.sh@123 -- # set -e 00:17:01.772 12:01:07 -- nvmf/common.sh@124 -- # return 0 00:17:01.772 12:01:07 -- nvmf/common.sh@477 -- # '[' -n 77903 ']' 00:17:01.772 12:01:07 -- nvmf/common.sh@478 -- # killprocess 77903 00:17:01.772 12:01:07 -- common/autotest_common.sh@936 -- # '[' -z 77903 ']' 00:17:01.772 Process with pid 77903 is not found 00:17:01.772 12:01:07 -- 
common/autotest_common.sh@940 -- # kill -0 77903 00:17:01.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77903) - No such process 00:17:01.772 12:01:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77903 is not found' 00:17:01.772 12:01:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:01.772 12:01:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:01.772 12:01:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:01.772 12:01:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.772 12:01:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:01.773 12:01:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.773 12:01:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.773 12:01:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.773 12:01:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:01.773 12:01:07 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:01.773 00:17:01.773 real 1m13.946s 00:17:01.773 user 1m54.745s 00:17:01.773 sys 0m25.321s 00:17:01.773 12:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:01.773 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:17:01.773 ************************************ 00:17:01.773 END TEST nvmf_tls 00:17:01.773 ************************************ 00:17:01.773 12:01:07 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:01.773 12:01:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.773 12:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.773 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:17:01.773 ************************************ 00:17:01.773 START TEST nvmf_fips 00:17:01.773 ************************************ 00:17:01.773 12:01:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:02.049 * Looking for test storage... 
00:17:02.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:02.049 12:01:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:02.049 12:01:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:02.049 12:01:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:02.049 12:01:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:02.049 12:01:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:02.049 12:01:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:02.049 12:01:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:02.049 12:01:07 -- scripts/common.sh@335 -- # IFS=.-: 00:17:02.049 12:01:07 -- scripts/common.sh@335 -- # read -ra ver1 00:17:02.049 12:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.049 12:01:07 -- scripts/common.sh@336 -- # read -ra ver2 00:17:02.049 12:01:07 -- scripts/common.sh@337 -- # local 'op=<' 00:17:02.049 12:01:07 -- scripts/common.sh@339 -- # ver1_l=2 00:17:02.049 12:01:07 -- scripts/common.sh@340 -- # ver2_l=1 00:17:02.049 12:01:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:02.049 12:01:07 -- scripts/common.sh@343 -- # case "$op" in 00:17:02.049 12:01:07 -- scripts/common.sh@344 -- # : 1 00:17:02.049 12:01:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:02.049 12:01:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.049 12:01:07 -- scripts/common.sh@364 -- # decimal 1 00:17:02.049 12:01:07 -- scripts/common.sh@352 -- # local d=1 00:17:02.049 12:01:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.049 12:01:07 -- scripts/common.sh@354 -- # echo 1 00:17:02.049 12:01:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:02.049 12:01:07 -- scripts/common.sh@365 -- # decimal 2 00:17:02.049 12:01:07 -- scripts/common.sh@352 -- # local d=2 00:17:02.049 12:01:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.049 12:01:07 -- scripts/common.sh@354 -- # echo 2 00:17:02.049 12:01:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:02.049 12:01:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:02.049 12:01:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:02.049 12:01:07 -- scripts/common.sh@367 -- # return 0 00:17:02.049 12:01:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.049 12:01:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.049 --rc genhtml_branch_coverage=1 00:17:02.049 --rc genhtml_function_coverage=1 00:17:02.049 --rc genhtml_legend=1 00:17:02.049 --rc geninfo_all_blocks=1 00:17:02.049 --rc geninfo_unexecuted_blocks=1 00:17:02.049 00:17:02.049 ' 00:17:02.049 12:01:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.049 --rc genhtml_branch_coverage=1 00:17:02.049 --rc genhtml_function_coverage=1 00:17:02.049 --rc genhtml_legend=1 00:17:02.049 --rc geninfo_all_blocks=1 00:17:02.049 --rc geninfo_unexecuted_blocks=1 00:17:02.049 00:17:02.049 ' 00:17:02.049 12:01:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.049 --rc genhtml_branch_coverage=1 00:17:02.049 --rc genhtml_function_coverage=1 00:17:02.049 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.050 
12:01:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.050 --rc genhtml_branch_coverage=1 00:17:02.050 --rc genhtml_function_coverage=1 00:17:02.050 --rc genhtml_legend=1 00:17:02.050 --rc geninfo_all_blocks=1 00:17:02.050 --rc geninfo_unexecuted_blocks=1 00:17:02.050 00:17:02.050 ' 00:17:02.050 12:01:07 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.050 12:01:07 -- nvmf/common.sh@7 -- # uname -s 00:17:02.050 12:01:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.050 12:01:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.050 12:01:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.050 12:01:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.050 12:01:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.050 12:01:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.050 12:01:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.050 12:01:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.050 12:01:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.050 12:01:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.050 12:01:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:17:02.050 12:01:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:17:02.050 12:01:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.050 12:01:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.050 12:01:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.050 12:01:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.050 12:01:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.050 12:01:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.050 12:01:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.050 12:01:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.050 12:01:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.050 12:01:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.050 12:01:07 -- paths/export.sh@5 -- # export PATH 00:17:02.050 12:01:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.050 12:01:07 -- nvmf/common.sh@46 -- # : 0 00:17:02.050 12:01:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:02.050 12:01:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:02.050 12:01:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:02.050 12:01:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.050 12:01:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.050 12:01:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:02.050 12:01:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:02.050 12:01:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:02.050 12:01:07 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.050 12:01:07 -- fips/fips.sh@89 -- # check_openssl_version 00:17:02.050 12:01:07 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:02.050 12:01:07 -- fips/fips.sh@85 -- # openssl version 00:17:02.050 12:01:07 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:02.050 12:01:07 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:02.050 12:01:07 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:02.050 12:01:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:02.050 12:01:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:02.050 12:01:07 -- scripts/common.sh@335 -- # IFS=.-: 00:17:02.050 12:01:07 -- scripts/common.sh@335 -- # read -ra ver1 00:17:02.050 12:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.050 12:01:07 -- scripts/common.sh@336 -- # read -ra ver2 00:17:02.050 12:01:07 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:02.050 12:01:07 -- scripts/common.sh@339 -- # ver1_l=3 00:17:02.050 12:01:07 -- scripts/common.sh@340 -- # ver2_l=3 00:17:02.050 12:01:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:02.050 12:01:07 -- scripts/common.sh@343 -- # case "$op" in 00:17:02.050 12:01:07 -- scripts/common.sh@347 -- # : 1 00:17:02.050 12:01:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:02.050 12:01:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.050 12:01:07 -- scripts/common.sh@364 -- # decimal 3 00:17:02.050 12:01:07 -- scripts/common.sh@352 -- # local d=3 00:17:02.050 12:01:07 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:02.050 12:01:07 -- scripts/common.sh@354 -- # echo 3 00:17:02.050 12:01:07 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:02.050 12:01:07 -- scripts/common.sh@365 -- # decimal 3 00:17:02.050 12:01:07 -- scripts/common.sh@352 -- # local d=3 00:17:02.050 12:01:07 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:02.050 12:01:07 -- scripts/common.sh@354 -- # echo 3 00:17:02.050 12:01:07 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:02.050 12:01:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:02.050 12:01:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:02.050 12:01:07 -- scripts/common.sh@363 -- # (( v++ )) 00:17:02.050 12:01:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.050 12:01:07 -- scripts/common.sh@364 -- # decimal 1 00:17:02.050 12:01:07 -- scripts/common.sh@352 -- # local d=1 00:17:02.050 12:01:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.050 12:01:07 -- scripts/common.sh@354 -- # echo 1 00:17:02.050 12:01:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:02.050 12:01:07 -- scripts/common.sh@365 -- # decimal 0 00:17:02.050 12:01:07 -- scripts/common.sh@352 -- # local d=0 00:17:02.050 12:01:07 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:02.050 12:01:07 -- scripts/common.sh@354 -- # echo 0 00:17:02.050 12:01:07 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:02.050 12:01:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:02.050 12:01:07 -- scripts/common.sh@366 -- # return 0 00:17:02.050 12:01:07 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:02.050 12:01:07 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:02.050 12:01:07 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:02.317 12:01:07 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:02.317 12:01:07 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:02.317 12:01:07 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:02.317 12:01:07 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:02.317 12:01:07 -- fips/fips.sh@113 -- # build_openssl_config 00:17:02.317 12:01:07 -- fips/fips.sh@37 -- # cat 00:17:02.317 12:01:07 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:02.317 12:01:07 -- fips/fips.sh@58 -- # cat - 00:17:02.317 12:01:07 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:02.317 12:01:07 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:02.317 12:01:07 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:02.317 12:01:07 -- fips/fips.sh@116 -- # openssl list -providers 00:17:02.317 12:01:07 -- fips/fips.sh@116 -- # grep name 00:17:02.317 12:01:07 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:02.318 12:01:07 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:02.318 12:01:07 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:02.318 12:01:07 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:02.318 12:01:07 -- common/autotest_common.sh@650 -- # local es=0 00:17:02.318 12:01:07 -- fips/fips.sh@127 -- # : 00:17:02.318 12:01:07 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:02.318 12:01:07 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:02.318 12:01:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.318 12:01:07 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:02.318 12:01:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.318 12:01:07 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:02.318 12:01:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.318 12:01:07 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:02.318 12:01:07 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:02.318 12:01:07 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:02.318 Error setting digest 00:17:02.318 40F2BFC4B07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:02.318 40F2BFC4B07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:02.318 12:01:07 -- common/autotest_common.sh@653 -- # es=1 00:17:02.318 12:01:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.318 12:01:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.318 12:01:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.318 12:01:07 -- fips/fips.sh@130 -- # nvmftestinit 00:17:02.318 12:01:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:02.318 12:01:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.318 12:01:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:02.318 12:01:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:02.318 12:01:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:02.318 12:01:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.318 12:01:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.318 12:01:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.318 12:01:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:02.318 12:01:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:02.318 12:01:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:02.318 12:01:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:02.318 12:01:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:02.318 12:01:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:02.318 12:01:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.318 12:01:07 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.318 12:01:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:02.318 12:01:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:02.318 12:01:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.318 12:01:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.318 12:01:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.318 12:01:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.318 12:01:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.318 12:01:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.318 12:01:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.318 12:01:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.318 12:01:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:02.318 12:01:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:02.318 Cannot find device "nvmf_tgt_br" 00:17:02.318 12:01:07 -- nvmf/common.sh@154 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.318 Cannot find device "nvmf_tgt_br2" 00:17:02.318 12:01:07 -- nvmf/common.sh@155 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:02.318 12:01:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:02.318 Cannot find device "nvmf_tgt_br" 00:17:02.318 12:01:07 -- nvmf/common.sh@157 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:02.318 Cannot find device "nvmf_tgt_br2" 00:17:02.318 12:01:07 -- nvmf/common.sh@158 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:02.318 12:01:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:02.318 12:01:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.318 12:01:07 -- nvmf/common.sh@161 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.318 12:01:07 -- nvmf/common.sh@162 -- # true 00:17:02.318 12:01:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.318 12:01:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.318 12:01:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.578 12:01:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.578 12:01:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.578 12:01:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.578 12:01:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.578 12:01:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.578 12:01:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:02.578 12:01:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:02.578 12:01:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:02.578 12:01:07 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:02.578 12:01:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:02.578 12:01:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.578 12:01:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.578 12:01:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.578 12:01:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:02.578 12:01:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:02.578 12:01:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.578 12:01:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.578 12:01:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.578 12:01:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.578 12:01:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.578 12:01:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:02.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:17:02.578 00:17:02.578 --- 10.0.0.2 ping statistics --- 00:17:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.578 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:02.578 12:01:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:02.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:02.578 00:17:02.578 --- 10.0.0.3 ping statistics --- 00:17:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.578 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:02.578 12:01:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:02.578 00:17:02.578 --- 10.0.0.1 ping statistics --- 00:17:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.578 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:02.578 12:01:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.578 12:01:07 -- nvmf/common.sh@421 -- # return 0 00:17:02.578 12:01:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:02.578 12:01:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.578 12:01:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:02.578 12:01:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:02.578 12:01:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.578 12:01:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:02.578 12:01:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:02.578 12:01:08 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:02.578 12:01:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:02.578 12:01:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.578 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:17:02.578 12:01:08 -- nvmf/common.sh@469 -- # nvmfpid=78289 00:17:02.578 12:01:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.578 12:01:08 -- nvmf/common.sh@470 -- # waitforlisten 78289 00:17:02.578 12:01:08 -- common/autotest_common.sh@829 -- # '[' -z 78289 ']' 00:17:02.578 12:01:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.578 12:01:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.578 12:01:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.578 12:01:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.578 12:01:08 -- common/autotest_common.sh@10 -- # set +x 00:17:02.837 [2024-11-29 12:01:08.101095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:02.837 [2024-11-29 12:01:08.101230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.837 [2024-11-29 12:01:08.238709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.095 [2024-11-29 12:01:08.367642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:03.095 [2024-11-29 12:01:08.367844] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.095 [2024-11-29 12:01:08.367861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.095 [2024-11-29 12:01:08.367873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
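The block above is nvmf_veth_init building the disposable test network: a network namespace for the target, three veth pairs whose host-side ends are enslaved to a bridge, the 10.0.0.x addresses, and firewall rules for the NVMe/TCP port. Condensed into plain commands (interface names, addresses and port 4420 are taken from the trace; the failed teardown attempts and error handling that precede them are omitted), the setup is roughly:

    # Hedged sketch of the veth/namespace topology nvmf_veth_init sets up (names and IPs from the trace above).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target-side ends into the namespace; the *_br ends stay in the root netns.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the three host-side ends together so all addresses share an L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in, let the bridge forward between its ports, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

With that in place 10.0.0.2 and 10.0.0.3 answer from inside nvmf_tgt_ns_spdk while 10.0.0.1 stays in the root namespace, which is why the target below is launched with ip netns exec and the initiator-side tools run unwrapped.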
00:17:03.095 [2024-11-29 12:01:08.367905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.663 12:01:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.663 12:01:09 -- common/autotest_common.sh@862 -- # return 0 00:17:03.663 12:01:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:03.663 12:01:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.663 12:01:09 -- common/autotest_common.sh@10 -- # set +x 00:17:03.663 12:01:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.663 12:01:09 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:03.663 12:01:09 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:03.663 12:01:09 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:03.663 12:01:09 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:03.663 12:01:09 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:03.663 12:01:09 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:03.663 12:01:09 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:03.663 12:01:09 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.921 [2024-11-29 12:01:09.421889] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.180 [2024-11-29 12:01:09.437812] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.180 [2024-11-29 12:01:09.438079] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.180 malloc0 00:17:04.180 12:01:09 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.180 12:01:09 -- fips/fips.sh@147 -- # bdevperf_pid=78329 00:17:04.180 12:01:09 -- fips/fips.sh@148 -- # waitforlisten 78329 /var/tmp/bdevperf.sock 00:17:04.180 12:01:09 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.180 12:01:09 -- common/autotest_common.sh@829 -- # '[' -z 78329 ']' 00:17:04.180 12:01:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.180 12:01:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.180 12:01:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.180 12:01:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.180 12:01:09 -- common/autotest_common.sh@10 -- # set +x 00:17:04.180 [2024-11-29 12:01:09.583987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:04.180 [2024-11-29 12:01:09.584096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78329 ] 00:17:04.439 [2024-11-29 12:01:09.723152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.439 [2024-11-29 12:01:09.826376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.376 12:01:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.376 12:01:10 -- common/autotest_common.sh@862 -- # return 0 00:17:05.376 12:01:10 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:05.376 [2024-11-29 12:01:10.757641] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.376 TLSTESTn1 00:17:05.376 12:01:10 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.635 Running I/O for 10 seconds... 00:17:15.609 00:17:15.609 Latency(us) 00:17:15.609 [2024-11-29T12:01:21.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.609 [2024-11-29T12:01:21.120Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:15.609 Verification LBA range: start 0x0 length 0x2000 00:17:15.609 TLSTESTn1 : 10.02 5499.23 21.48 0.00 0.00 23233.88 5689.72 26571.87 00:17:15.609 [2024-11-29T12:01:21.120Z] =================================================================================================================== 00:17:15.609 [2024-11-29T12:01:21.120Z] Total : 5499.23 21.48 0.00 0.00 23233.88 5689.72 26571.87 00:17:15.609 0 00:17:15.609 12:01:21 -- fips/fips.sh@1 -- # cleanup 00:17:15.609 12:01:21 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:15.609 12:01:21 -- common/autotest_common.sh@806 -- # type=--id 00:17:15.609 12:01:21 -- common/autotest_common.sh@807 -- # id=0 00:17:15.609 12:01:21 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:15.609 12:01:21 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:15.609 12:01:21 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:15.609 12:01:21 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:15.609 12:01:21 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:15.609 12:01:21 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:15.609 nvmf_trace.0 00:17:15.609 12:01:21 -- common/autotest_common.sh@821 -- # return 0 00:17:15.609 12:01:21 -- fips/fips.sh@16 -- # killprocess 78329 00:17:15.609 12:01:21 -- common/autotest_common.sh@936 -- # '[' -z 78329 ']' 00:17:15.609 12:01:21 -- common/autotest_common.sh@940 -- # kill -0 78329 00:17:15.609 12:01:21 -- common/autotest_common.sh@941 -- # uname 00:17:15.609 12:01:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.609 12:01:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78329 00:17:15.868 12:01:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:15.868 12:01:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:15.868 
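The FIPS case itself boils down to proving that a TLS-PSK NVMe/TCP connection still works while MD5 is unavailable: the test writes the NVMeTLSkey-1 PSK to key.txt, configures the target listener, then has bdevperf attach over TLS and drive verify I/O for ten seconds (the 5499 IOPS TLSTESTn1 line above). Stripped of the rpc_cmd and waitforlisten plumbing, the sequence is approximately as follows; paths and arguments are copied from the trace, while the redirection of the key into the file and the target-side subsystem RPCs are inferred rather than shown there:

    # Hedged reconstruction of the TLS-PSK path exercised above.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"      # redirection inferred; the trace only shows the echo
    chmod 0600 "$key_path"

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side (setup_nvmf_tgt_conf): subsystem nqn.2016-06.io.spdk:cnode1 with a
    # TLS-capable listener on 10.0.0.2:4420, configured through $rpc as in fips.sh@24.

    # Initiator side: a separate bdevperf application with its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # ... wait for /var/tmp/bdevperf.sock to come up ...

    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The earlier "openssl md5 /dev/fd/62" failure is the precondition for all of this: it confirms the Red Hat FIPS provider is actually enforcing the algorithm policy, so the TLS I/O above really did run under FIPS constraints.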
killing process with pid 78329 00:17:15.868 12:01:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78329' 00:17:15.868 12:01:21 -- common/autotest_common.sh@955 -- # kill 78329 00:17:15.868 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.868 00:17:15.868 Latency(us) 00:17:15.868 [2024-11-29T12:01:21.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.868 [2024-11-29T12:01:21.379Z] =================================================================================================================== 00:17:15.868 [2024-11-29T12:01:21.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.868 12:01:21 -- common/autotest_common.sh@960 -- # wait 78329 00:17:16.127 12:01:21 -- fips/fips.sh@17 -- # nvmftestfini 00:17:16.127 12:01:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:16.127 12:01:21 -- nvmf/common.sh@116 -- # sync 00:17:16.127 12:01:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:16.127 12:01:21 -- nvmf/common.sh@119 -- # set +e 00:17:16.127 12:01:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:16.127 12:01:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:16.127 rmmod nvme_tcp 00:17:16.127 rmmod nvme_fabrics 00:17:16.127 rmmod nvme_keyring 00:17:16.127 12:01:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:16.127 12:01:21 -- nvmf/common.sh@123 -- # set -e 00:17:16.127 12:01:21 -- nvmf/common.sh@124 -- # return 0 00:17:16.127 12:01:21 -- nvmf/common.sh@477 -- # '[' -n 78289 ']' 00:17:16.127 12:01:21 -- nvmf/common.sh@478 -- # killprocess 78289 00:17:16.127 12:01:21 -- common/autotest_common.sh@936 -- # '[' -z 78289 ']' 00:17:16.127 12:01:21 -- common/autotest_common.sh@940 -- # kill -0 78289 00:17:16.127 12:01:21 -- common/autotest_common.sh@941 -- # uname 00:17:16.127 12:01:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.127 12:01:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78289 00:17:16.127 12:01:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:16.127 killing process with pid 78289 00:17:16.127 12:01:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:16.127 12:01:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78289' 00:17:16.127 12:01:21 -- common/autotest_common.sh@955 -- # kill 78289 00:17:16.127 12:01:21 -- common/autotest_common.sh@960 -- # wait 78289 00:17:16.385 12:01:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:16.385 12:01:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:16.385 12:01:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:16.385 12:01:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.385 12:01:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:16.385 12:01:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.385 12:01:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.385 12:01:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.385 12:01:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:16.385 12:01:21 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:16.386 00:17:16.386 real 0m14.613s 00:17:16.386 user 0m19.895s 00:17:16.386 sys 0m5.859s 00:17:16.386 12:01:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:16.386 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:17:16.386 ************************************ 00:17:16.386 END TEST nvmf_fips 
00:17:16.386 ************************************ 00:17:16.645 12:01:21 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:16.645 12:01:21 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:16.645 12:01:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:16.645 12:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.645 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:17:16.645 ************************************ 00:17:16.645 START TEST nvmf_fuzz 00:17:16.645 ************************************ 00:17:16.645 12:01:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:16.646 * Looking for test storage... 00:17:16.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:16.646 12:01:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:16.646 12:01:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:16.646 12:01:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:16.646 12:01:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:16.646 12:01:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:16.646 12:01:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:16.646 12:01:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:16.646 12:01:22 -- scripts/common.sh@335 -- # IFS=.-: 00:17:16.646 12:01:22 -- scripts/common.sh@335 -- # read -ra ver1 00:17:16.646 12:01:22 -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.646 12:01:22 -- scripts/common.sh@336 -- # read -ra ver2 00:17:16.646 12:01:22 -- scripts/common.sh@337 -- # local 'op=<' 00:17:16.646 12:01:22 -- scripts/common.sh@339 -- # ver1_l=2 00:17:16.646 12:01:22 -- scripts/common.sh@340 -- # ver2_l=1 00:17:16.646 12:01:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:16.646 12:01:22 -- scripts/common.sh@343 -- # case "$op" in 00:17:16.646 12:01:22 -- scripts/common.sh@344 -- # : 1 00:17:16.646 12:01:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:16.646 12:01:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.646 12:01:22 -- scripts/common.sh@364 -- # decimal 1 00:17:16.646 12:01:22 -- scripts/common.sh@352 -- # local d=1 00:17:16.646 12:01:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.646 12:01:22 -- scripts/common.sh@354 -- # echo 1 00:17:16.646 12:01:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:16.646 12:01:22 -- scripts/common.sh@365 -- # decimal 2 00:17:16.646 12:01:22 -- scripts/common.sh@352 -- # local d=2 00:17:16.646 12:01:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.646 12:01:22 -- scripts/common.sh@354 -- # echo 2 00:17:16.646 12:01:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:16.646 12:01:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:16.646 12:01:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:16.646 12:01:22 -- scripts/common.sh@367 -- # return 0 00:17:16.646 12:01:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.646 12:01:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.646 --rc genhtml_branch_coverage=1 00:17:16.646 --rc genhtml_function_coverage=1 00:17:16.646 --rc genhtml_legend=1 00:17:16.646 --rc geninfo_all_blocks=1 00:17:16.646 --rc geninfo_unexecuted_blocks=1 00:17:16.646 00:17:16.646 ' 00:17:16.646 12:01:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.646 --rc genhtml_branch_coverage=1 00:17:16.646 --rc genhtml_function_coverage=1 00:17:16.646 --rc genhtml_legend=1 00:17:16.646 --rc geninfo_all_blocks=1 00:17:16.646 --rc geninfo_unexecuted_blocks=1 00:17:16.646 00:17:16.646 ' 00:17:16.646 12:01:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.646 --rc genhtml_branch_coverage=1 00:17:16.646 --rc genhtml_function_coverage=1 00:17:16.646 --rc genhtml_legend=1 00:17:16.646 --rc geninfo_all_blocks=1 00:17:16.646 --rc geninfo_unexecuted_blocks=1 00:17:16.646 00:17:16.646 ' 00:17:16.646 12:01:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:16.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.646 --rc genhtml_branch_coverage=1 00:17:16.646 --rc genhtml_function_coverage=1 00:17:16.646 --rc genhtml_legend=1 00:17:16.646 --rc geninfo_all_blocks=1 00:17:16.646 --rc geninfo_unexecuted_blocks=1 00:17:16.646 00:17:16.646 ' 00:17:16.646 12:01:22 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.646 12:01:22 -- nvmf/common.sh@7 -- # uname -s 00:17:16.646 12:01:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.646 12:01:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.646 12:01:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.646 12:01:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.646 12:01:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.646 12:01:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.646 12:01:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.646 12:01:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.646 12:01:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.646 12:01:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.646 12:01:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 
00:17:16.646 12:01:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:17:16.646 12:01:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.646 12:01:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.646 12:01:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.646 12:01:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.646 12:01:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.646 12:01:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.646 12:01:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.646 12:01:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.646 12:01:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.646 12:01:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.646 12:01:22 -- paths/export.sh@5 -- # export PATH 00:17:16.646 12:01:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.646 12:01:22 -- nvmf/common.sh@46 -- # : 0 00:17:16.646 12:01:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:16.646 12:01:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:16.646 12:01:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:16.646 12:01:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.646 12:01:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.646 12:01:22 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:16.646 12:01:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:16.646 12:01:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:16.646 12:01:22 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:16.646 12:01:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:16.646 12:01:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.646 12:01:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:16.646 12:01:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:16.646 12:01:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:16.646 12:01:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.646 12:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.646 12:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.905 12:01:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:16.905 12:01:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:16.905 12:01:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:16.905 12:01:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:16.905 12:01:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:16.905 12:01:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:16.905 12:01:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.905 12:01:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.905 12:01:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.905 12:01:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:16.905 12:01:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.905 12:01:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.905 12:01:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.905 12:01:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.905 12:01:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.905 12:01:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.905 12:01:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.906 12:01:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.906 12:01:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:16.906 12:01:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:16.906 Cannot find device "nvmf_tgt_br" 00:17:16.906 12:01:22 -- nvmf/common.sh@154 -- # true 00:17:16.906 12:01:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.906 Cannot find device "nvmf_tgt_br2" 00:17:16.906 12:01:22 -- nvmf/common.sh@155 -- # true 00:17:16.906 12:01:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:16.906 12:01:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:16.906 Cannot find device "nvmf_tgt_br" 00:17:16.906 12:01:22 -- nvmf/common.sh@157 -- # true 00:17:16.906 12:01:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:16.906 Cannot find device "nvmf_tgt_br2" 00:17:16.906 12:01:22 -- nvmf/common.sh@158 -- # true 00:17:16.906 12:01:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:16.906 12:01:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:16.906 12:01:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.906 12:01:22 -- nvmf/common.sh@161 -- # true 00:17:16.906 12:01:22 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.906 12:01:22 -- nvmf/common.sh@162 -- # true 00:17:16.906 12:01:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.906 12:01:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.906 12:01:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.906 12:01:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.906 12:01:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.906 12:01:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.906 12:01:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.906 12:01:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.906 12:01:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.906 12:01:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:16.906 12:01:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:16.906 12:01:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:16.906 12:01:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:16.906 12:01:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.906 12:01:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.165 12:01:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.165 12:01:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:17.165 12:01:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:17.165 12:01:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.165 12:01:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.165 12:01:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.165 12:01:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.165 12:01:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.165 12:01:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:17.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:17:17.165 00:17:17.165 --- 10.0.0.2 ping statistics --- 00:17:17.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.165 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:17.165 12:01:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:17.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:17.165 00:17:17.165 --- 10.0.0.3 ping statistics --- 00:17:17.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.165 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:17.165 12:01:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:17:17.165 00:17:17.165 --- 10.0.0.1 ping statistics --- 00:17:17.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.165 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:17.165 12:01:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.165 12:01:22 -- nvmf/common.sh@421 -- # return 0 00:17:17.165 12:01:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:17.165 12:01:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.165 12:01:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:17.165 12:01:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:17.165 12:01:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.165 12:01:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:17.165 12:01:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:17.165 12:01:22 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78661 00:17:17.165 12:01:22 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:17.165 12:01:22 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78661 00:17:17.165 12:01:22 -- common/autotest_common.sh@829 -- # '[' -z 78661 ']' 00:17:17.165 12:01:22 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:17.165 12:01:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.165 12:01:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.165 12:01:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
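At this point the fuzz run's own copy of the network exists and the target has just been launched inside the namespace; the RPC calls that follow in the trace (TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem, its namespace and a listener) then give nvme_fuzz something to connect to. A condensed sketch of that bring-up, with the waitforlisten polling loop elided and plain rpc.py standing in for the rpc_cmd helper:

    # Hedged sketch of the fuzz-target bring-up around this point (paths and arguments from the trace).
    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # ... poll until /var/tmp/spdk.sock accepts RPCs (waitforlisten $nvmfpid) ...

    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$spdk/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512
    "$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # The fuzzer then hammers that listener: 30 seconds of seeded random commands, followed by
    # a second pass replaying the canned commands from example.json.
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    "$spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    "$spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j "$spdk/test/app/fuzz/nvme_fuzz/example.json" -a

Both invocations ending with "Shutting down the fuzz application" is the pass condition here: the target survived the fuzzed commands, so the script tears everything down and reports its timing.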
00:17:17.165 12:01:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.165 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 12:01:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.102 12:01:23 -- common/autotest_common.sh@862 -- # return 0 00:17:18.102 12:01:23 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.102 12:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.102 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 12:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:18.361 12:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.361 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 Malloc0 00:17:18.361 12:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.361 12:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.361 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 12:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.361 12:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.361 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 12:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.361 12:01:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.361 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 12:01:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:18.361 12:01:23 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:18.620 Shutting down the fuzz application 00:17:18.620 12:01:24 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:18.878 Shutting down the fuzz application 00:17:18.878 12:01:24 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.878 12:01:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.878 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.878 12:01:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.878 12:01:24 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:18.878 12:01:24 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:18.878 12:01:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:18.878 12:01:24 -- nvmf/common.sh@116 -- # sync 00:17:19.147 12:01:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:19.147 12:01:24 -- nvmf/common.sh@119 -- # set +e 00:17:19.147 12:01:24 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:19.147 12:01:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:19.147 rmmod nvme_tcp 00:17:19.147 rmmod nvme_fabrics 00:17:19.147 rmmod nvme_keyring 00:17:19.147 12:01:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:19.147 12:01:24 -- nvmf/common.sh@123 -- # set -e 00:17:19.147 12:01:24 -- nvmf/common.sh@124 -- # return 0 00:17:19.147 12:01:24 -- nvmf/common.sh@477 -- # '[' -n 78661 ']' 00:17:19.147 12:01:24 -- nvmf/common.sh@478 -- # killprocess 78661 00:17:19.147 12:01:24 -- common/autotest_common.sh@936 -- # '[' -z 78661 ']' 00:17:19.147 12:01:24 -- common/autotest_common.sh@940 -- # kill -0 78661 00:17:19.147 12:01:24 -- common/autotest_common.sh@941 -- # uname 00:17:19.147 12:01:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.147 12:01:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78661 00:17:19.147 12:01:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.147 12:01:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.147 killing process with pid 78661 00:17:19.147 12:01:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78661' 00:17:19.147 12:01:24 -- common/autotest_common.sh@955 -- # kill 78661 00:17:19.147 12:01:24 -- common/autotest_common.sh@960 -- # wait 78661 00:17:19.416 12:01:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:19.416 12:01:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:19.416 12:01:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:19.416 12:01:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.416 12:01:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:19.416 12:01:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.417 12:01:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.417 12:01:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.417 12:01:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:19.417 12:01:24 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:19.417 ************************************ 00:17:19.417 END TEST nvmf_fuzz 00:17:19.417 ************************************ 00:17:19.417 00:17:19.417 real 0m2.984s 00:17:19.417 user 0m3.128s 00:17:19.417 sys 0m0.708s 00:17:19.417 12:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:19.417 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:17:19.675 12:01:24 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:19.675 12:01:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:19.675 12:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:19.675 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:17:19.675 ************************************ 00:17:19.675 START TEST nvmf_multiconnection 00:17:19.675 ************************************ 00:17:19.675 12:01:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:19.675 * Looking for test storage... 
00:17:19.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.675 12:01:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:19.675 12:01:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:19.675 12:01:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:19.675 12:01:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:19.675 12:01:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:19.675 12:01:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:19.675 12:01:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:19.675 12:01:25 -- scripts/common.sh@335 -- # IFS=.-: 00:17:19.675 12:01:25 -- scripts/common.sh@335 -- # read -ra ver1 00:17:19.675 12:01:25 -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.675 12:01:25 -- scripts/common.sh@336 -- # read -ra ver2 00:17:19.675 12:01:25 -- scripts/common.sh@337 -- # local 'op=<' 00:17:19.675 12:01:25 -- scripts/common.sh@339 -- # ver1_l=2 00:17:19.675 12:01:25 -- scripts/common.sh@340 -- # ver2_l=1 00:17:19.675 12:01:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:19.675 12:01:25 -- scripts/common.sh@343 -- # case "$op" in 00:17:19.675 12:01:25 -- scripts/common.sh@344 -- # : 1 00:17:19.675 12:01:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:19.675 12:01:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:19.675 12:01:25 -- scripts/common.sh@364 -- # decimal 1 00:17:19.675 12:01:25 -- scripts/common.sh@352 -- # local d=1 00:17:19.675 12:01:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.675 12:01:25 -- scripts/common.sh@354 -- # echo 1 00:17:19.675 12:01:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:19.935 12:01:25 -- scripts/common.sh@365 -- # decimal 2 00:17:19.935 12:01:25 -- scripts/common.sh@352 -- # local d=2 00:17:19.935 12:01:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.935 12:01:25 -- scripts/common.sh@354 -- # echo 2 00:17:19.935 12:01:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:19.935 12:01:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:19.935 12:01:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:19.935 12:01:25 -- scripts/common.sh@367 -- # return 0 00:17:19.936 12:01:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.936 12:01:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.936 --rc genhtml_branch_coverage=1 00:17:19.936 --rc genhtml_function_coverage=1 00:17:19.936 --rc genhtml_legend=1 00:17:19.936 --rc geninfo_all_blocks=1 00:17:19.936 --rc geninfo_unexecuted_blocks=1 00:17:19.936 00:17:19.936 ' 00:17:19.936 12:01:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.936 --rc genhtml_branch_coverage=1 00:17:19.936 --rc genhtml_function_coverage=1 00:17:19.936 --rc genhtml_legend=1 00:17:19.936 --rc geninfo_all_blocks=1 00:17:19.936 --rc geninfo_unexecuted_blocks=1 00:17:19.936 00:17:19.936 ' 00:17:19.936 12:01:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.936 --rc genhtml_branch_coverage=1 00:17:19.936 --rc genhtml_function_coverage=1 00:17:19.936 --rc genhtml_legend=1 00:17:19.936 --rc geninfo_all_blocks=1 00:17:19.936 --rc geninfo_unexecuted_blocks=1 00:17:19.936 00:17:19.936 ' 00:17:19.936 
12:01:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.936 --rc genhtml_branch_coverage=1 00:17:19.936 --rc genhtml_function_coverage=1 00:17:19.936 --rc genhtml_legend=1 00:17:19.936 --rc geninfo_all_blocks=1 00:17:19.936 --rc geninfo_unexecuted_blocks=1 00:17:19.936 00:17:19.936 ' 00:17:19.936 12:01:25 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.936 12:01:25 -- nvmf/common.sh@7 -- # uname -s 00:17:19.936 12:01:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.936 12:01:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.936 12:01:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.936 12:01:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.936 12:01:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.936 12:01:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.936 12:01:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.936 12:01:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.936 12:01:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.936 12:01:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:17:19.936 12:01:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:17:19.936 12:01:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.936 12:01:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.936 12:01:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.936 12:01:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.936 12:01:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.936 12:01:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.936 12:01:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.936 12:01:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.936 12:01:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.936 12:01:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.936 12:01:25 -- paths/export.sh@5 -- # export PATH 00:17:19.936 12:01:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.936 12:01:25 -- nvmf/common.sh@46 -- # : 0 00:17:19.936 12:01:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:19.936 12:01:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:19.936 12:01:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:19.936 12:01:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.936 12:01:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.936 12:01:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:19.936 12:01:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:19.936 12:01:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:19.936 12:01:25 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.936 12:01:25 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.936 12:01:25 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:19.936 12:01:25 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:19.936 12:01:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:19.936 12:01:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.936 12:01:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:19.936 12:01:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:19.936 12:01:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:19.936 12:01:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.936 12:01:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.936 12:01:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.936 12:01:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:19.936 12:01:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:19.936 12:01:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.936 12:01:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.936 12:01:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:19.936 12:01:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:19.936 12:01:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.936 12:01:25 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.936 12:01:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.936 12:01:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.936 12:01:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.936 12:01:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.936 12:01:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.936 12:01:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.936 12:01:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:19.936 12:01:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:19.936 Cannot find device "nvmf_tgt_br" 00:17:19.936 12:01:25 -- nvmf/common.sh@154 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.936 Cannot find device "nvmf_tgt_br2" 00:17:19.936 12:01:25 -- nvmf/common.sh@155 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:19.936 12:01:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:19.936 Cannot find device "nvmf_tgt_br" 00:17:19.936 12:01:25 -- nvmf/common.sh@157 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:19.936 Cannot find device "nvmf_tgt_br2" 00:17:19.936 12:01:25 -- nvmf/common.sh@158 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:19.936 12:01:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:19.936 12:01:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.936 12:01:25 -- nvmf/common.sh@161 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.936 12:01:25 -- nvmf/common.sh@162 -- # true 00:17:19.936 12:01:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.936 12:01:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.936 12:01:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.936 12:01:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.936 12:01:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.936 12:01:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.936 12:01:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.936 12:01:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:19.936 12:01:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:19.937 12:01:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:19.937 12:01:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:20.196 12:01:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:20.196 12:01:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:20.196 12:01:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.196 12:01:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:20.196 12:01:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.196 12:01:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:20.196 12:01:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:20.196 12:01:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.197 12:01:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.197 12:01:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.197 12:01:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.197 12:01:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.197 12:01:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:20.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:20.197 00:17:20.197 --- 10.0.0.2 ping statistics --- 00:17:20.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.197 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:20.197 12:01:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:20.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:20.197 00:17:20.197 --- 10.0.0.3 ping statistics --- 00:17:20.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.197 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:20.197 12:01:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:17:20.197 00:17:20.197 --- 10.0.0.1 ping statistics --- 00:17:20.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.197 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:20.197 12:01:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.197 12:01:25 -- nvmf/common.sh@421 -- # return 0 00:17:20.197 12:01:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.197 12:01:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.197 12:01:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.197 12:01:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.197 12:01:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.197 12:01:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.197 12:01:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.197 12:01:25 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:20.197 12:01:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.197 12:01:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.197 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:17:20.197 12:01:25 -- nvmf/common.sh@469 -- # nvmfpid=78863 00:17:20.197 12:01:25 -- nvmf/common.sh@470 -- # waitforlisten 78863 00:17:20.197 12:01:25 -- common/autotest_common.sh@829 -- # '[' -z 78863 ']' 00:17:20.197 12:01:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.197 12:01:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.197 12:01:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.197 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:20.197 12:01:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.197 12:01:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.197 12:01:25 -- common/autotest_common.sh@10 -- # set +x 00:17:20.197 [2024-11-29 12:01:25.643020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:20.197 [2024-11-29 12:01:25.643152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.455 [2024-11-29 12:01:25.780515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.455 [2024-11-29 12:01:25.903936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.455 [2024-11-29 12:01:25.904333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.455 [2024-11-29 12:01:25.904366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.455 [2024-11-29 12:01:25.904377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.455 [2024-11-29 12:01:25.904627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.455 [2024-11-29 12:01:25.904724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.455 [2024-11-29 12:01:25.908556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.455 [2024-11-29 12:01:25.908575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.389 12:01:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.389 12:01:26 -- common/autotest_common.sh@862 -- # return 0 00:17:21.389 12:01:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.389 12:01:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.389 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.389 12:01:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.389 12:01:26 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.389 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.389 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.389 [2024-11-29 12:01:26.751759] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.389 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.389 12:01:26 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:21.389 12:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.389 12:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:21.389 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.389 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.389 Malloc1 00:17:21.389 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.389 12:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:21.389 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.389 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 
12:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 12:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 [2024-11-29 12:01:26.843865] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 12:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.390 12:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 Malloc2 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 12:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 12:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.390 12:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:21.390 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.390 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.649 12:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 Malloc3 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
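The RPC traffic above repeats the same pattern for Malloc1 through Malloc11: create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal sketch of one iteration, assuming SPDK's scripts/rpc.py is invoked directly against the default /var/tmp/spdk.sock socket rather than through the harness's rpc_cmd wrapper (NQN, serial number, and listener address are copied from the log):

    # create a 64 MiB malloc bdev with 512-byte blocks and export it as cnode1
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The transport only has to be created once (nvmf_create_transport -t tcp -o -u 8192, as issued earlier in the log); the four per-subsystem calls are then repeated for cnode1 through cnode11.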
00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.649 12:01:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 Malloc4 00:17:21.649 12:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:21.649 12:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.649 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 Malloc5 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.649 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 Malloc6 00:17:21.649 12:01:27 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.649 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.649 Malloc7 00:17:21.649 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.649 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:21.649 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.649 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.909 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 Malloc8 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 
-- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.909 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 Malloc9 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.909 12:01:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 Malloc10 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.909 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:21.909 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.909 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.910 12:01:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.910 12:01:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:21.910 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.910 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.910 Malloc11 00:17:21.910 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.910 12:01:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:21.910 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.910 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.910 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.910 12:01:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:21.910 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.910 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.910 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.910 12:01:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:21.910 12:01:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.910 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.910 12:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.910 12:01:27 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:21.910 12:01:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:21.910 12:01:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.168 12:01:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:22.168 12:01:27 -- common/autotest_common.sh@1187 -- # local i=0 00:17:22.168 12:01:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.168 12:01:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:22.168 12:01:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:24.079 12:01:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:24.079 12:01:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:24.079 12:01:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:17:24.079 12:01:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:24.079 12:01:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.079 12:01:29 -- common/autotest_common.sh@1197 -- # return 0 00:17:24.079 12:01:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:24.079 12:01:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:24.348 12:01:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:24.348 12:01:29 -- common/autotest_common.sh@1187 -- # local i=0 00:17:24.348 12:01:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.348 12:01:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:24.348 12:01:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:26.306 12:01:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:26.306 12:01:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:17:26.306 12:01:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:17:26.306 12:01:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:26.306 12:01:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.306 12:01:31 -- common/autotest_common.sh@1197 -- # return 0 00:17:26.306 12:01:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:26.306 12:01:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:26.565 12:01:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:26.565 12:01:31 -- common/autotest_common.sh@1187 -- # local i=0 00:17:26.565 12:01:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.565 12:01:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:26.565 12:01:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:28.471 12:01:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:28.471 12:01:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:28.471 12:01:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:17:28.471 12:01:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:28.471 12:01:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.471 12:01:33 -- common/autotest_common.sh@1197 -- # return 0 00:17:28.471 12:01:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:28.471 12:01:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:28.730 12:01:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:28.730 12:01:34 -- common/autotest_common.sh@1187 -- # local i=0 00:17:28.730 12:01:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.730 12:01:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:28.730 12:01:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:30.633 12:01:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:30.633 12:01:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:30.633 12:01:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:17:30.633 12:01:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:30.633 12:01:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.633 12:01:36 -- common/autotest_common.sh@1197 -- # return 0 00:17:30.633 12:01:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:30.633 12:01:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:17:30.892 12:01:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:30.892 12:01:36 -- common/autotest_common.sh@1187 -- # local i=0 00:17:30.892 12:01:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.892 12:01:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:30.892 12:01:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:32.792 12:01:38 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:32.792 12:01:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:32.792 12:01:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:17:32.792 12:01:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:32.792 12:01:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.792 12:01:38 -- common/autotest_common.sh@1197 -- # return 0 00:17:32.792 12:01:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:32.792 12:01:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:17:33.050 12:01:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:33.050 12:01:38 -- common/autotest_common.sh@1187 -- # local i=0 00:17:33.050 12:01:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.050 12:01:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:33.050 12:01:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:34.954 12:01:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:34.954 12:01:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:34.954 12:01:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:17:34.954 12:01:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:34.954 12:01:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.954 12:01:40 -- common/autotest_common.sh@1197 -- # return 0 00:17:34.954 12:01:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:34.954 12:01:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:17:35.213 12:01:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:17:35.213 12:01:40 -- common/autotest_common.sh@1187 -- # local i=0 00:17:35.213 12:01:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.213 12:01:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:35.213 12:01:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:37.134 12:01:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:37.134 12:01:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:37.134 12:01:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:17:37.134 12:01:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:37.134 12:01:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:37.134 12:01:42 -- common/autotest_common.sh@1197 -- # return 0 00:17:37.134 12:01:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:37.134 12:01:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:17:37.393 12:01:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:17:37.393 12:01:42 -- common/autotest_common.sh@1187 -- # local i=0 00:17:37.393 12:01:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.393 12:01:42 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:37.393 12:01:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:39.299 12:01:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:39.299 12:01:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:39.299 12:01:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:17:39.299 12:01:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:39.299 12:01:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.299 12:01:44 -- common/autotest_common.sh@1197 -- # return 0 00:17:39.299 12:01:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:39.299 12:01:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:17:39.557 12:01:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:17:39.557 12:01:44 -- common/autotest_common.sh@1187 -- # local i=0 00:17:39.557 12:01:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.557 12:01:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:39.557 12:01:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:41.461 12:01:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:41.461 12:01:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:17:41.461 12:01:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:41.461 12:01:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:41.461 12:01:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.461 12:01:46 -- common/autotest_common.sh@1197 -- # return 0 00:17:41.461 12:01:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:41.461 12:01:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:17:41.720 12:01:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:17:41.720 12:01:47 -- common/autotest_common.sh@1187 -- # local i=0 00:17:41.720 12:01:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.720 12:01:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:41.720 12:01:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:43.622 12:01:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:43.622 12:01:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:43.622 12:01:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:17:43.622 12:01:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:43.622 12:01:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.622 12:01:49 -- common/autotest_common.sh@1197 -- # return 0 00:17:43.622 12:01:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:43.622 12:01:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:17:43.881 12:01:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:17:43.881 12:01:49 -- common/autotest_common.sh@1187 -- # local i=0 
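Each of the eleven connects above follows the same initiator-side pattern: attach to the subsystem over TCP, then poll until a block device carrying the expected serial shows up. A rough sketch of one iteration, assuming nvme-cli on the host and reusing the host NQN/ID and listener address from the log (the 15-attempt, 2-second retry loop mirrors the harness's waitforserial helper):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae \
        --hostid=79493c5c-f53c-4dad-804b-85e045bfadae
    for i in $(seq 1 15); do
        # the namespace backing cnode1 was created with serial SPDK1
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK1)" -ge 1 ] && break
        sleep 2
    done

Once all eleven devices are present, the fio wrapper later in the log drives them as /dev/nvme0n1 through /dev/nvme10n1.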
00:17:43.881 12:01:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.881 12:01:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:43.881 12:01:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:45.786 12:01:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:45.786 12:01:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:45.786 12:01:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:17:46.045 12:01:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:46.045 12:01:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.045 12:01:51 -- common/autotest_common.sh@1197 -- # return 0 00:17:46.045 12:01:51 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:17:46.045 [global] 00:17:46.045 thread=1 00:17:46.045 invalidate=1 00:17:46.045 rw=read 00:17:46.045 time_based=1 00:17:46.045 runtime=10 00:17:46.045 ioengine=libaio 00:17:46.045 direct=1 00:17:46.045 bs=262144 00:17:46.045 iodepth=64 00:17:46.045 norandommap=1 00:17:46.045 numjobs=1 00:17:46.045 00:17:46.045 [job0] 00:17:46.045 filename=/dev/nvme0n1 00:17:46.045 [job1] 00:17:46.045 filename=/dev/nvme10n1 00:17:46.045 [job2] 00:17:46.045 filename=/dev/nvme1n1 00:17:46.045 [job3] 00:17:46.045 filename=/dev/nvme2n1 00:17:46.045 [job4] 00:17:46.045 filename=/dev/nvme3n1 00:17:46.045 [job5] 00:17:46.045 filename=/dev/nvme4n1 00:17:46.045 [job6] 00:17:46.045 filename=/dev/nvme5n1 00:17:46.045 [job7] 00:17:46.045 filename=/dev/nvme6n1 00:17:46.045 [job8] 00:17:46.045 filename=/dev/nvme7n1 00:17:46.045 [job9] 00:17:46.045 filename=/dev/nvme8n1 00:17:46.045 [job10] 00:17:46.045 filename=/dev/nvme9n1 00:17:46.045 Could not set queue depth (nvme0n1) 00:17:46.045 Could not set queue depth (nvme10n1) 00:17:46.045 Could not set queue depth (nvme1n1) 00:17:46.045 Could not set queue depth (nvme2n1) 00:17:46.045 Could not set queue depth (nvme3n1) 00:17:46.045 Could not set queue depth (nvme4n1) 00:17:46.045 Could not set queue depth (nvme5n1) 00:17:46.045 Could not set queue depth (nvme6n1) 00:17:46.045 Could not set queue depth (nvme7n1) 00:17:46.045 Could not set queue depth (nvme8n1) 00:17:46.045 Could not set queue depth (nvme9n1) 00:17:46.305 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:17:46.305 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:46.305 fio-3.35 00:17:46.305 Starting 11 threads 00:17:58.610 00:17:58.610 job0: (groupid=0, jobs=1): err= 0: pid=79322: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=487, BW=122MiB/s (128MB/s)(1227MiB/10069msec) 00:17:58.610 slat (usec): min=18, max=47941, avg=2034.80, stdev=4638.48 00:17:58.610 clat (msec): min=48, max=203, avg=129.05, stdev=20.09 00:17:58.610 lat (msec): min=52, max=204, avg=131.08, stdev=20.33 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 91], 5.00th=[ 104], 10.00th=[ 107], 20.00th=[ 113], 00:17:58.610 | 30.00th=[ 116], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 133], 00:17:58.610 | 70.00th=[ 140], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 163], 00:17:58.610 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 201], 00:17:58.610 | 99.99th=[ 205] 00:17:58.610 bw ( KiB/s): min=98304, max=145117, per=8.54%, avg=123953.30, stdev=14749.89, samples=20 00:17:58.610 iops : min= 384, max= 566, avg=484.10, stdev=57.53, samples=20 00:17:58.610 lat (msec) : 50=0.02%, 100=2.55%, 250=97.43% 00:17:58.610 cpu : usr=0.23%, sys=2.22%, ctx=1101, majf=0, minf=4097 00:17:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.610 issued rwts: total=4907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.610 job1: (groupid=0, jobs=1): err= 0: pid=79323: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=332, BW=83.1MiB/s (87.2MB/s)(844MiB/10145msec) 00:17:58.610 slat (usec): min=18, max=53741, avg=2934.93, stdev=7375.87 00:17:58.610 clat (msec): min=22, max=358, avg=189.19, stdev=40.01 00:17:58.610 lat (msec): min=23, max=358, avg=192.13, stdev=40.92 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 70], 5.00th=[ 116], 10.00th=[ 142], 20.00th=[ 155], 00:17:58.610 | 30.00th=[ 176], 40.00th=[ 188], 50.00th=[ 197], 60.00th=[ 203], 00:17:58.610 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 243], 00:17:58.610 | 99.00th=[ 264], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 359], 00:17:58.610 | 99.99th=[ 359] 00:17:58.610 bw ( KiB/s): min=68096, max=125700, per=5.84%, avg=84729.85, stdev=15750.03, samples=20 00:17:58.610 iops : min= 266, max= 491, avg=330.95, stdev=61.50, samples=20 00:17:58.610 lat (msec) : 50=0.03%, 100=3.23%, 250=93.75%, 500=2.99% 00:17:58.610 cpu : usr=0.22%, sys=1.40%, ctx=785, majf=0, minf=4097 00:17:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:17:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.610 issued rwts: total=3374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.610 job2: (groupid=0, jobs=1): err= 0: pid=79324: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=333, BW=83.3MiB/s (87.3MB/s)(845MiB/10148msec) 00:17:58.610 slat (usec): min=20, max=55916, avg=2927.47, stdev=6863.30 00:17:58.610 clat (msec): min=62, max=364, avg=188.80, stdev=37.99 00:17:58.610 lat (msec): min=62, max=364, avg=191.72, stdev=38.79 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 83], 5.00th=[ 124], 
10.00th=[ 138], 20.00th=[ 153], 00:17:58.610 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 197], 60.00th=[ 203], 00:17:58.610 | 70.00th=[ 209], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 239], 00:17:58.610 | 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 355], 99.95th=[ 355], 00:17:58.610 | 99.99th=[ 368] 00:17:58.610 bw ( KiB/s): min=67584, max=120320, per=5.85%, avg=84929.35, stdev=15068.00, samples=20 00:17:58.610 iops : min= 264, max= 470, avg=331.75, stdev=58.85, samples=20 00:17:58.610 lat (msec) : 100=2.22%, 250=95.71%, 500=2.07% 00:17:58.610 cpu : usr=0.16%, sys=1.61%, ctx=816, majf=0, minf=4097 00:17:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:17:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.610 issued rwts: total=3381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.610 job3: (groupid=0, jobs=1): err= 0: pid=79325: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=523, BW=131MiB/s (137MB/s)(1327MiB/10145msec) 00:17:58.610 slat (usec): min=20, max=173463, avg=1872.17, stdev=5658.48 00:17:58.610 clat (msec): min=5, max=346, avg=120.29, stdev=66.48 00:17:58.610 lat (msec): min=5, max=380, avg=122.16, stdev=67.51 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 46], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 72], 00:17:58.610 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 91], 00:17:58.610 | 70.00th=[ 127], 80.00th=[ 209], 90.00th=[ 228], 95.00th=[ 239], 00:17:58.610 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 347], 99.95th=[ 347], 00:17:58.610 | 99.99th=[ 347] 00:17:58.610 bw ( KiB/s): min=69120, max=230400, per=9.25%, avg=134253.80, stdev=64660.14, samples=20 00:17:58.610 iops : min= 270, max= 900, avg=524.40, stdev=252.56, samples=20 00:17:58.610 lat (msec) : 10=0.04%, 20=0.17%, 50=1.06%, 100=65.94%, 250=30.09% 00:17:58.610 lat (msec) : 500=2.71% 00:17:58.610 cpu : usr=0.24%, sys=2.09%, ctx=1162, majf=0, minf=4097 00:17:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.610 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.610 job4: (groupid=0, jobs=1): err= 0: pid=79326: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=487, BW=122MiB/s (128MB/s)(1229MiB/10072msec) 00:17:58.610 slat (usec): min=18, max=32031, avg=2030.09, stdev=4512.88 00:17:58.610 clat (msec): min=26, max=205, avg=128.87, stdev=20.35 00:17:58.610 lat (msec): min=28, max=205, avg=130.90, stdev=20.60 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 83], 5.00th=[ 102], 10.00th=[ 107], 20.00th=[ 112], 00:17:58.610 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 133], 00:17:58.610 | 70.00th=[ 140], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 163], 00:17:58.610 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 199], 99.95th=[ 205], 00:17:58.610 | 99.99th=[ 207] 00:17:58.610 bw ( KiB/s): min=98501, max=146432, per=8.56%, avg=124239.15, stdev=14788.49, samples=20 00:17:58.610 iops : min= 384, max= 572, avg=484.95, stdev=57.86, samples=20 00:17:58.610 lat (msec) : 50=0.22%, 100=3.70%, 250=96.07% 00:17:58.610 cpu : usr=0.32%, sys=2.05%, ctx=1129, majf=0, minf=4097 
00:17:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.610 issued rwts: total=4915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.610 job5: (groupid=0, jobs=1): err= 0: pid=79327: Fri Nov 29 12:02:02 2024 00:17:58.610 read: IOPS=487, BW=122MiB/s (128MB/s)(1228MiB/10072msec) 00:17:58.610 slat (usec): min=18, max=45713, avg=2031.33, stdev=4680.15 00:17:58.610 clat (msec): min=31, max=203, avg=128.97, stdev=20.91 00:17:58.610 lat (msec): min=31, max=204, avg=131.00, stdev=21.12 00:17:58.610 clat percentiles (msec): 00:17:58.610 | 1.00th=[ 92], 5.00th=[ 103], 10.00th=[ 107], 20.00th=[ 111], 00:17:58.610 | 30.00th=[ 116], 40.00th=[ 121], 50.00th=[ 127], 60.00th=[ 132], 00:17:58.610 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 157], 95.00th=[ 165], 00:17:58.610 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 201], 00:17:58.610 | 99.99th=[ 205] 00:17:58.610 bw ( KiB/s): min=97987, max=147456, per=8.55%, avg=124172.75, stdev=14902.80, samples=20 00:17:58.611 iops : min= 382, max= 576, avg=484.65, stdev=58.30, samples=20 00:17:58.611 lat (msec) : 50=0.33%, 100=2.85%, 250=96.82% 00:17:58.611 cpu : usr=0.30%, sys=2.06%, ctx=1116, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 job6: (groupid=0, jobs=1): err= 0: pid=79328: Fri Nov 29 12:02:02 2024 00:17:58.611 read: IOPS=335, BW=84.0MiB/s (88.0MB/s)(853MiB/10153msec) 00:17:58.611 slat (usec): min=21, max=66550, avg=2929.86, stdev=7300.78 00:17:58.611 clat (msec): min=18, max=362, avg=187.34, stdev=44.33 00:17:58.611 lat (msec): min=18, max=362, avg=190.27, stdev=45.14 00:17:58.611 clat percentiles (msec): 00:17:58.611 | 1.00th=[ 68], 5.00th=[ 96], 10.00th=[ 138], 20.00th=[ 150], 00:17:58.611 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 203], 00:17:58.611 | 70.00th=[ 211], 80.00th=[ 224], 90.00th=[ 236], 95.00th=[ 245], 00:17:58.611 | 99.00th=[ 266], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 363], 00:17:58.611 | 99.99th=[ 363] 00:17:58.611 bw ( KiB/s): min=67206, max=150016, per=5.90%, avg=85680.70, stdev=19694.91, samples=20 00:17:58.611 iops : min= 262, max= 586, avg=334.45, stdev=77.00, samples=20 00:17:58.611 lat (msec) : 20=0.03%, 50=0.65%, 100=5.07%, 250=90.76%, 500=3.49% 00:17:58.611 cpu : usr=0.18%, sys=1.52%, ctx=804, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=3410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 job7: (groupid=0, jobs=1): err= 0: pid=79329: Fri Nov 29 12:02:02 2024 00:17:58.611 read: IOPS=339, BW=84.8MiB/s (88.9MB/s)(861MiB/10151msec) 00:17:58.611 slat (usec): min=21, max=50613, avg=2904.30, stdev=6825.51 
00:17:58.611 clat (msec): min=28, max=355, avg=185.49, stdev=44.11 00:17:58.611 lat (msec): min=28, max=355, avg=188.39, stdev=44.92 00:17:58.611 clat percentiles (msec): 00:17:58.611 | 1.00th=[ 60], 5.00th=[ 90], 10.00th=[ 136], 20.00th=[ 150], 00:17:58.611 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 201], 00:17:58.611 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 232], 95.00th=[ 241], 00:17:58.611 | 99.00th=[ 264], 99.50th=[ 300], 99.90th=[ 347], 99.95th=[ 355], 00:17:58.611 | 99.99th=[ 355] 00:17:58.611 bw ( KiB/s): min=67584, max=153088, per=5.96%, avg=86500.60, stdev=19871.54, samples=20 00:17:58.611 iops : min= 264, max= 598, avg=337.65, stdev=77.69, samples=20 00:17:58.611 lat (msec) : 50=0.49%, 100=6.39%, 250=90.79%, 500=2.32% 00:17:58.611 cpu : usr=0.22%, sys=1.64%, ctx=795, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=3442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 job8: (groupid=0, jobs=1): err= 0: pid=79330: Fri Nov 29 12:02:02 2024 00:17:58.611 read: IOPS=1391, BW=348MiB/s (365MB/s)(3483MiB/10015msec) 00:17:58.611 slat (usec): min=19, max=56200, avg=713.93, stdev=1782.00 00:17:58.611 clat (msec): min=11, max=159, avg=45.22, stdev=16.90 00:17:58.611 lat (msec): min=15, max=169, avg=45.93, stdev=17.15 00:17:58.611 clat percentiles (msec): 00:17:58.611 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:17:58.611 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 41], 00:17:58.611 | 70.00th=[ 42], 80.00th=[ 44], 90.00th=[ 77], 95.00th=[ 86], 00:17:58.611 | 99.00th=[ 104], 99.50th=[ 113], 99.90th=[ 140], 99.95th=[ 155], 00:17:58.611 | 99.99th=[ 159] 00:17:58.611 bw ( KiB/s): min=143647, max=437248, per=24.47%, avg=355274.50, stdev=98733.29, samples=20 00:17:58.611 iops : min= 561, max= 1708, avg=1387.65, stdev=385.72, samples=20 00:17:58.611 lat (msec) : 20=0.12%, 50=84.17%, 100=14.41%, 250=1.29% 00:17:58.611 cpu : usr=0.59%, sys=4.55%, ctx=2644, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=13931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 job9: (groupid=0, jobs=1): err= 0: pid=79331: Fri Nov 29 12:02:02 2024 00:17:58.611 read: IOPS=590, BW=148MiB/s (155MB/s)(1488MiB/10069msec) 00:17:58.611 slat (usec): min=18, max=96237, avg=1675.24, stdev=4395.66 00:17:58.611 clat (msec): min=10, max=207, avg=106.41, stdev=37.72 00:17:58.611 lat (msec): min=10, max=221, avg=108.09, stdev=38.18 00:17:58.611 clat percentiles (msec): 00:17:58.611 | 1.00th=[ 35], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 75], 00:17:58.611 | 30.00th=[ 79], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 116], 00:17:58.611 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 167], 00:17:58.611 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 209], 00:17:58.611 | 99.99th=[ 209] 00:17:58.611 bw ( KiB/s): min=92857, max=271872, per=10.39%, avg=150776.00, stdev=52382.80, samples=20 00:17:58.611 iops : min= 362, max= 
1062, avg=588.60, stdev=204.78, samples=20 00:17:58.611 lat (msec) : 20=0.34%, 50=3.63%, 100=52.84%, 250=43.19% 00:17:58.611 cpu : usr=0.35%, sys=2.18%, ctx=1260, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=5950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 job10: (groupid=0, jobs=1): err= 0: pid=79332: Fri Nov 29 12:02:02 2024 00:17:58.611 read: IOPS=399, BW=99.8MiB/s (105MB/s)(1012MiB/10145msec) 00:17:58.611 slat (usec): min=21, max=110481, avg=2437.90, stdev=7132.44 00:17:58.611 clat (msec): min=20, max=358, avg=157.72, stdev=65.17 00:17:58.611 lat (msec): min=21, max=358, avg=160.15, stdev=66.33 00:17:58.611 clat percentiles (msec): 00:17:58.611 | 1.00th=[ 52], 5.00th=[ 71], 10.00th=[ 78], 20.00th=[ 84], 00:17:58.611 | 30.00th=[ 90], 40.00th=[ 125], 50.00th=[ 186], 60.00th=[ 197], 00:17:58.611 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 230], 95.00th=[ 243], 00:17:58.611 | 99.00th=[ 268], 99.50th=[ 296], 99.90th=[ 351], 99.95th=[ 351], 00:17:58.611 | 99.99th=[ 359] 00:17:58.611 bw ( KiB/s): min=68096, max=205312, per=7.03%, avg=102011.95, stdev=47004.88, samples=20 00:17:58.611 iops : min= 266, max= 802, avg=398.45, stdev=183.55, samples=20 00:17:58.611 lat (msec) : 50=0.64%, 100=37.15%, 250=59.24%, 500=2.96% 00:17:58.611 cpu : usr=0.19%, sys=1.69%, ctx=911, majf=0, minf=4097 00:17:58.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:17:58.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:58.611 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.611 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:58.611 00:17:58.611 Run status group 0 (all jobs): 00:17:58.611 READ: bw=1418MiB/s (1487MB/s), 83.1MiB/s-348MiB/s (87.2MB/s-365MB/s), io=14.1GiB (15.1GB), run=10015-10153msec 00:17:58.611 00:17:58.611 Disk stats (read/write): 00:17:58.611 nvme0n1: ios=9686/0, merge=0/0, ticks=1230711/0, in_queue=1230711, util=97.64% 00:17:58.611 nvme10n1: ios=6627/0, merge=0/0, ticks=1222351/0, in_queue=1222351, util=97.85% 00:17:58.611 nvme1n1: ios=6639/0, merge=0/0, ticks=1221830/0, in_queue=1221830, util=97.96% 00:17:58.611 nvme2n1: ios=10482/0, merge=0/0, ticks=1223987/0, in_queue=1223987, util=98.00% 00:17:58.611 nvme3n1: ios=9703/0, merge=0/0, ticks=1230416/0, in_queue=1230416, util=98.22% 00:17:58.611 nvme4n1: ios=9711/0, merge=0/0, ticks=1230885/0, in_queue=1230885, util=98.38% 00:17:58.611 nvme5n1: ios=6692/0, merge=0/0, ticks=1221694/0, in_queue=1221694, util=98.64% 00:17:58.611 nvme6n1: ios=6756/0, merge=0/0, ticks=1219994/0, in_queue=1219994, util=98.58% 00:17:58.611 nvme7n1: ios=27735/0, merge=0/0, ticks=1240800/0, in_queue=1240800, util=98.87% 00:17:58.611 nvme8n1: ios=11782/0, merge=0/0, ticks=1232688/0, in_queue=1232688, util=99.00% 00:17:58.611 nvme9n1: ios=7966/0, merge=0/0, ticks=1223456/0, in_queue=1223456, util=99.04% 00:17:58.611 12:02:02 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:17:58.611 [global] 00:17:58.611 thread=1 00:17:58.611 invalidate=1 00:17:58.611 rw=randwrite 00:17:58.611 
time_based=1 00:17:58.611 runtime=10 00:17:58.611 ioengine=libaio 00:17:58.611 direct=1 00:17:58.611 bs=262144 00:17:58.611 iodepth=64 00:17:58.611 norandommap=1 00:17:58.611 numjobs=1 00:17:58.611 00:17:58.611 [job0] 00:17:58.611 filename=/dev/nvme0n1 00:17:58.611 [job1] 00:17:58.611 filename=/dev/nvme10n1 00:17:58.611 [job2] 00:17:58.611 filename=/dev/nvme1n1 00:17:58.611 [job3] 00:17:58.611 filename=/dev/nvme2n1 00:17:58.611 [job4] 00:17:58.611 filename=/dev/nvme3n1 00:17:58.611 [job5] 00:17:58.611 filename=/dev/nvme4n1 00:17:58.611 [job6] 00:17:58.611 filename=/dev/nvme5n1 00:17:58.611 [job7] 00:17:58.611 filename=/dev/nvme6n1 00:17:58.611 [job8] 00:17:58.611 filename=/dev/nvme7n1 00:17:58.611 [job9] 00:17:58.611 filename=/dev/nvme8n1 00:17:58.611 [job10] 00:17:58.611 filename=/dev/nvme9n1 00:17:58.611 Could not set queue depth (nvme0n1) 00:17:58.611 Could not set queue depth (nvme10n1) 00:17:58.611 Could not set queue depth (nvme1n1) 00:17:58.611 Could not set queue depth (nvme2n1) 00:17:58.611 Could not set queue depth (nvme3n1) 00:17:58.611 Could not set queue depth (nvme4n1) 00:17:58.611 Could not set queue depth (nvme5n1) 00:17:58.611 Could not set queue depth (nvme6n1) 00:17:58.611 Could not set queue depth (nvme7n1) 00:17:58.611 Could not set queue depth (nvme8n1) 00:17:58.612 Could not set queue depth (nvme9n1) 00:17:58.612 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:58.612 fio-3.35 00:17:58.612 Starting 11 threads 00:18:08.591 00:18:08.591 job0: (groupid=0, jobs=1): err= 0: pid=79537: Fri Nov 29 12:02:12 2024 00:18:08.591 write: IOPS=311, BW=77.9MiB/s (81.6MB/s)(786MiB/10096msec); 0 zone resets 00:18:08.591 slat (usec): min=19, max=32713, avg=3010.80, stdev=5580.90 00:18:08.591 clat (msec): min=14, max=291, avg=202.44, stdev=39.88 00:18:08.591 lat (msec): min=14, max=291, avg=205.45, stdev=40.38 00:18:08.591 clat percentiles (msec): 00:18:08.591 | 1.00th=[ 44], 5.00th=[ 118], 10.00th=[ 184], 20.00th=[ 197], 00:18:08.591 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 211], 00:18:08.591 | 70.00th=[ 213], 80.00th=[ 218], 90.00th=[ 243], 95.00th=[ 255], 00:18:08.591 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 292], 
00:18:08.591 | 99.99th=[ 292] 00:18:08.591 bw ( KiB/s): min=61440, max=117013, per=6.89%, avg=78847.05, stdev=11143.05, samples=20 00:18:08.591 iops : min= 240, max= 457, avg=307.95, stdev=43.54, samples=20 00:18:08.591 lat (msec) : 20=0.45%, 50=1.02%, 100=2.54%, 250=88.99%, 500=7.00% 00:18:08.591 cpu : usr=0.85%, sys=0.73%, ctx=4141, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.592 issued rwts: total=0,3144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job1: (groupid=0, jobs=1): err= 0: pid=79538: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=293, BW=73.4MiB/s (77.0MB/s)(745MiB/10144msec); 0 zone resets 00:18:08.592 slat (usec): min=22, max=54506, avg=3349.99, stdev=5972.02 00:18:08.592 clat (msec): min=57, max=329, avg=214.41, stdev=28.07 00:18:08.592 lat (msec): min=57, max=329, avg=217.76, stdev=27.92 00:18:08.592 clat percentiles (msec): 00:18:08.592 | 1.00th=[ 130], 5.00th=[ 188], 10.00th=[ 197], 20.00th=[ 199], 00:18:08.592 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 211], 00:18:08.592 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 259], 95.00th=[ 275], 00:18:08.592 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 330], 00:18:08.592 | 99.99th=[ 330] 00:18:08.592 bw ( KiB/s): min=59392, max=82267, per=6.53%, avg=74667.55, stdev=7317.08, samples=20 00:18:08.592 iops : min= 232, max= 321, avg=291.60, stdev=28.62, samples=20 00:18:08.592 lat (msec) : 100=0.67%, 250=85.70%, 500=13.62% 00:18:08.592 cpu : usr=0.69%, sys=1.06%, ctx=2550, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.592 issued rwts: total=0,2980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job2: (groupid=0, jobs=1): err= 0: pid=79550: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=537, BW=134MiB/s (141MB/s)(1358MiB/10104msec); 0 zone resets 00:18:08.592 slat (usec): min=16, max=15138, avg=1810.68, stdev=3126.12 00:18:08.592 clat (msec): min=14, max=216, avg=117.21, stdev=13.39 00:18:08.592 lat (msec): min=14, max=216, avg=119.02, stdev=13.30 00:18:08.592 clat percentiles (msec): 00:18:08.592 | 1.00th=[ 57], 5.00th=[ 107], 10.00th=[ 110], 20.00th=[ 113], 00:18:08.592 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 120], 00:18:08.592 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 124], 95.00th=[ 131], 00:18:08.592 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 209], 00:18:08.592 | 99.99th=[ 218] 00:18:08.592 bw ( KiB/s): min=110592, max=147161, per=12.01%, avg=137391.80, stdev=7344.92, samples=20 00:18:08.592 iops : min= 432, max= 574, avg=536.60, stdev=28.60, samples=20 00:18:08.592 lat (msec) : 20=0.20%, 50=0.63%, 100=1.49%, 250=97.68% 00:18:08.592 cpu : usr=0.91%, sys=1.35%, ctx=7475, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:18:08.592 issued rwts: total=0,5431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job3: (groupid=0, jobs=1): err= 0: pid=79551: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=948, BW=237MiB/s (249MB/s)(2391MiB/10086msec); 0 zone resets 00:18:08.592 slat (usec): min=14, max=32385, avg=1022.72, stdev=1769.25 00:18:08.592 clat (usec): min=1838, max=203069, avg=66452.93, stdev=9735.28 00:18:08.592 lat (msec): min=3, max=227, avg=67.48, stdev= 9.78 00:18:08.592 clat percentiles (msec): 00:18:08.592 | 1.00th=[ 39], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 64], 00:18:08.592 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 67], 00:18:08.592 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 70], 95.00th=[ 71], 00:18:08.592 | 99.00th=[ 101], 99.50th=[ 138], 99.90th=[ 180], 99.95th=[ 188], 00:18:08.592 | 99.99th=[ 203] 00:18:08.592 bw ( KiB/s): min=217088, max=255488, per=21.26%, avg=243200.00, stdev=8026.97, samples=20 00:18:08.592 iops : min= 848, max= 998, avg=950.00, stdev=31.36, samples=20 00:18:08.592 lat (msec) : 2=0.01%, 4=0.02%, 10=0.17%, 20=0.26%, 50=0.89% 00:18:08.592 lat (msec) : 100=97.65%, 250=1.00% 00:18:08.592 cpu : usr=1.56%, sys=2.06%, ctx=12706, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.592 issued rwts: total=0,9563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job4: (groupid=0, jobs=1): err= 0: pid=79552: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=319, BW=79.9MiB/s (83.8MB/s)(812MiB/10166msec); 0 zone resets 00:18:08.592 slat (usec): min=19, max=136400, avg=3073.91, stdev=5778.87 00:18:08.592 clat (msec): min=24, max=361, avg=197.05, stdev=27.57 00:18:08.592 lat (msec): min=24, max=361, avg=200.12, stdev=27.35 00:18:08.592 clat percentiles (msec): 00:18:08.592 | 1.00th=[ 148], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 176], 00:18:08.592 | 30.00th=[ 192], 40.00th=[ 199], 50.00th=[ 203], 60.00th=[ 207], 00:18:08.592 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:18:08.592 | 99.00th=[ 284], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:18:08.592 | 99.99th=[ 363] 00:18:08.592 bw ( KiB/s): min=73728, max=102400, per=7.13%, avg=81543.45, stdev=8171.87, samples=20 00:18:08.592 iops : min= 288, max= 400, avg=318.50, stdev=31.90, samples=20 00:18:08.592 lat (msec) : 50=0.37%, 100=0.12%, 250=97.26%, 500=2.25% 00:18:08.592 cpu : usr=0.70%, sys=0.86%, ctx=3625, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.592 issued rwts: total=0,3249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job5: (groupid=0, jobs=1): err= 0: pid=79554: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=303, BW=75.8MiB/s (79.5MB/s)(769MiB/10148msec); 0 zone resets 00:18:08.592 slat (usec): min=19, max=21942, avg=3201.00, stdev=5653.05 00:18:08.592 clat (msec): min=17, max=334, avg=207.85, stdev=31.22 00:18:08.592 lat (msec): min=17, max=334, avg=211.05, stdev=31.33 00:18:08.592 clat percentiles (msec): 
00:18:08.592 | 1.00th=[ 79], 5.00th=[ 155], 10.00th=[ 190], 20.00th=[ 199], 00:18:08.592 | 30.00th=[ 201], 40.00th=[ 207], 50.00th=[ 209], 60.00th=[ 211], 00:18:08.592 | 70.00th=[ 213], 80.00th=[ 218], 90.00th=[ 241], 95.00th=[ 257], 00:18:08.592 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 321], 99.95th=[ 334], 00:18:08.592 | 99.99th=[ 334] 00:18:08.592 bw ( KiB/s): min=63488, max=101684, per=6.74%, avg=77115.85, stdev=8030.46, samples=20 00:18:08.592 iops : min= 248, max= 397, avg=301.20, stdev=31.37, samples=20 00:18:08.592 lat (msec) : 20=0.13%, 50=0.39%, 100=1.24%, 250=90.93%, 500=7.31% 00:18:08.592 cpu : usr=0.71%, sys=0.75%, ctx=4500, majf=0, minf=1 00:18:08.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:08.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.592 issued rwts: total=0,3076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.592 job6: (groupid=0, jobs=1): err= 0: pid=79555: Fri Nov 29 12:02:12 2024 00:18:08.592 write: IOPS=295, BW=73.9MiB/s (77.5MB/s)(750MiB/10146msec); 0 zone resets 00:18:08.592 slat (usec): min=19, max=41412, avg=3328.64, stdev=5866.37 00:18:08.592 clat (msec): min=20, max=331, avg=213.04, stdev=30.88 00:18:08.592 lat (msec): min=20, max=331, avg=216.37, stdev=30.85 00:18:08.592 clat percentiles (msec): 00:18:08.592 | 1.00th=[ 81], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 199], 00:18:08.592 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 211], 00:18:08.593 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 257], 95.00th=[ 271], 00:18:08.593 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 321], 99.95th=[ 334], 00:18:08.593 | 99.99th=[ 334] 00:18:08.593 bw ( KiB/s): min=59273, max=82267, per=6.57%, avg=75173.00, stdev=6877.14, samples=20 00:18:08.593 iops : min= 231, max= 321, avg=293.60, stdev=26.91, samples=20 00:18:08.593 lat (msec) : 50=0.53%, 100=0.67%, 250=85.37%, 500=13.43% 00:18:08.593 cpu : usr=0.86%, sys=0.86%, ctx=3065, majf=0, minf=1 00:18:08.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.593 issued rwts: total=0,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.593 job7: (groupid=0, jobs=1): err= 0: pid=79556: Fri Nov 29 12:02:12 2024 00:18:08.593 write: IOPS=531, BW=133MiB/s (139MB/s)(1341MiB/10098msec); 0 zone resets 00:18:08.593 slat (usec): min=18, max=48607, avg=1859.42, stdev=3217.48 00:18:08.593 clat (msec): min=51, max=205, avg=118.59, stdev=10.52 00:18:08.593 lat (msec): min=51, max=205, avg=120.44, stdev=10.18 00:18:08.593 clat percentiles (msec): 00:18:08.593 | 1.00th=[ 106], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 113], 00:18:08.593 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:18:08.593 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 124], 95.00th=[ 134], 00:18:08.593 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 199], 99.95th=[ 201], 00:18:08.593 | 99.99th=[ 205] 00:18:08.593 bw ( KiB/s): min=102400, max=145408, per=11.86%, avg=135691.40, stdev=8677.79, samples=20 00:18:08.593 iops : min= 400, max= 568, avg=530.00, stdev=33.87, samples=20 00:18:08.593 lat (msec) : 100=0.54%, 250=99.46% 00:18:08.593 cpu : usr=1.03%, 
sys=1.70%, ctx=7653, majf=0, minf=1 00:18:08.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.593 issued rwts: total=0,5364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.593 job8: (groupid=0, jobs=1): err= 0: pid=79558: Fri Nov 29 12:02:12 2024 00:18:08.593 write: IOPS=325, BW=81.3MiB/s (85.2MB/s)(826MiB/10161msec); 0 zone resets 00:18:08.593 slat (usec): min=23, max=79044, avg=3023.46, stdev=5364.34 00:18:08.593 clat (msec): min=81, max=354, avg=193.72, stdev=23.99 00:18:08.593 lat (msec): min=81, max=354, avg=196.74, stdev=23.77 00:18:08.593 clat percentiles (msec): 00:18:08.593 | 1.00th=[ 146], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 169], 00:18:08.593 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 203], 00:18:08.593 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 213], 95.00th=[ 218], 00:18:08.593 | 99.00th=[ 257], 99.50th=[ 305], 99.90th=[ 342], 99.95th=[ 355], 00:18:08.593 | 99.99th=[ 355] 00:18:08.593 bw ( KiB/s): min=75776, max=102400, per=7.25%, avg=82961.55, stdev=7756.17, samples=20 00:18:08.593 iops : min= 296, max= 400, avg=324.05, stdev=30.30, samples=20 00:18:08.593 lat (msec) : 100=0.24%, 250=98.73%, 500=1.03% 00:18:08.593 cpu : usr=0.73%, sys=0.80%, ctx=4461, majf=0, minf=1 00:18:08.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.593 issued rwts: total=0,3304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.593 job9: (groupid=0, jobs=1): err= 0: pid=79559: Fri Nov 29 12:02:12 2024 00:18:08.593 write: IOPS=303, BW=75.9MiB/s (79.6MB/s)(773MiB/10176msec); 0 zone resets 00:18:08.593 slat (usec): min=20, max=39040, avg=3150.28, stdev=5676.99 00:18:08.593 clat (msec): min=10, max=367, avg=207.51, stdev=35.86 00:18:08.593 lat (msec): min=10, max=367, avg=210.66, stdev=36.07 00:18:08.593 clat percentiles (msec): 00:18:08.593 | 1.00th=[ 63], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 197], 00:18:08.593 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:18:08.593 | 70.00th=[ 211], 80.00th=[ 220], 90.00th=[ 255], 95.00th=[ 266], 00:18:08.593 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 368], 00:18:08.593 | 99.99th=[ 368] 00:18:08.593 bw ( KiB/s): min=63361, max=93696, per=6.77%, avg=77470.05, stdev=6676.66, samples=20 00:18:08.593 iops : min= 247, max= 366, avg=302.55, stdev=26.17, samples=20 00:18:08.593 lat (msec) : 20=0.23%, 50=0.39%, 100=1.91%, 250=86.25%, 500=11.23% 00:18:08.593 cpu : usr=0.79%, sys=1.04%, ctx=3579, majf=0, minf=1 00:18:08.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.593 issued rwts: total=0,3090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.593 job10: (groupid=0, jobs=1): err= 0: pid=79560: Fri Nov 29 12:02:12 2024 00:18:08.593 write: IOPS=322, BW=80.5MiB/s (84.4MB/s)(818MiB/10161msec); 0 zone resets 
00:18:08.593 slat (usec): min=18, max=126235, avg=3050.96, stdev=5672.75 00:18:08.593 clat (msec): min=128, max=355, avg=195.62, stdev=23.70 00:18:08.593 lat (msec): min=128, max=355, avg=198.67, stdev=23.41 00:18:08.593 clat percentiles (msec): 00:18:08.593 | 1.00th=[ 148], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 171], 00:18:08.593 | 30.00th=[ 192], 40.00th=[ 199], 50.00th=[ 201], 60.00th=[ 205], 00:18:08.593 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 215], 95.00th=[ 220], 00:18:08.593 | 99.00th=[ 268], 99.50th=[ 309], 99.90th=[ 342], 99.95th=[ 355], 00:18:08.593 | 99.99th=[ 355] 00:18:08.593 bw ( KiB/s): min=75776, max=102400, per=7.18%, avg=82150.05, stdev=7962.79, samples=20 00:18:08.593 iops : min= 296, max= 400, avg=320.85, stdev=31.13, samples=20 00:18:08.593 lat (msec) : 250=98.72%, 500=1.28% 00:18:08.593 cpu : usr=0.95%, sys=0.94%, ctx=3423, majf=0, minf=1 00:18:08.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:08.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:08.593 issued rwts: total=0,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:08.593 00:18:08.593 Run status group 0 (all jobs): 00:18:08.593 WRITE: bw=1117MiB/s (1171MB/s), 73.4MiB/s-237MiB/s (77.0MB/s-249MB/s), io=11.1GiB (11.9GB), run=10086-10176msec 00:18:08.593 00:18:08.593 Disk stats (read/write): 00:18:08.593 nvme0n1: ios=50/6114, merge=0/0, ticks=50/1215912, in_queue=1215962, util=97.77% 00:18:08.593 nvme10n1: ios=49/5804, merge=0/0, ticks=54/1206224, in_queue=1206278, util=97.82% 00:18:08.593 nvme1n1: ios=45/10695, merge=0/0, ticks=66/1212020, in_queue=1212086, util=97.96% 00:18:08.593 nvme2n1: ios=33/18942, merge=0/0, ticks=55/1212609, in_queue=1212664, util=97.89% 00:18:08.593 nvme3n1: ios=25/6347, merge=0/0, ticks=79/1205677, in_queue=1205756, util=97.89% 00:18:08.593 nvme4n1: ios=0/6000, merge=0/0, ticks=0/1208077, in_queue=1208077, util=98.14% 00:18:08.593 nvme5n1: ios=0/5845, merge=0/0, ticks=0/1206307, in_queue=1206307, util=98.21% 00:18:08.593 nvme6n1: ios=0/10535, merge=0/0, ticks=0/1209193, in_queue=1209193, util=98.15% 00:18:08.593 nvme7n1: ios=0/6449, merge=0/0, ticks=0/1204787, in_queue=1204787, util=98.45% 00:18:08.593 nvme8n1: ios=0/6037, merge=0/0, ticks=0/1209308, in_queue=1209308, util=98.88% 00:18:08.593 nvme9n1: ios=0/6384, merge=0/0, ticks=0/1205072, in_queue=1205072, util=98.67% 00:18:08.593 12:02:12 -- target/multiconnection.sh@36 -- # sync 00:18:08.593 12:02:12 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:08.593 12:02:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.593 12:02:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:08.594 12:02:13 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:08.594 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.594 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:08.594 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.594 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:08.594 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.594 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.594 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.594 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.594 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:08.594 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:08.594 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:08.595 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:08.595 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.595 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:08.595 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.595 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.595 12:02:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.595 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.595 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:08.595 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:08.595 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:08.595 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:08.595 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.595 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:08.595 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.595 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:18:08.595 12:02:13 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.595 12:02:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.595 12:02:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:08.595 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:08.595 12:02:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:08.595 12:02:13 -- common/autotest_common.sh@1208 -- # local i=0 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:08.595 12:02:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:08.595 12:02:13 -- common/autotest_common.sh@1220 -- # return 0 00:18:08.595 12:02:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:08.595 12:02:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.595 12:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:08.595 12:02:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.595 12:02:14 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:08.595 12:02:14 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:08.595 12:02:14 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:08.595 12:02:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:08.595 12:02:14 -- nvmf/common.sh@116 -- # sync 00:18:08.595 12:02:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:08.595 12:02:14 -- nvmf/common.sh@119 -- # set +e 00:18:08.595 12:02:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:08.595 12:02:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:08.595 rmmod nvme_tcp 00:18:08.595 rmmod nvme_fabrics 00:18:08.595 rmmod nvme_keyring 00:18:08.595 12:02:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:08.595 12:02:14 -- nvmf/common.sh@123 -- # set -e 00:18:08.595 12:02:14 -- nvmf/common.sh@124 -- # return 0 00:18:08.595 12:02:14 -- nvmf/common.sh@477 -- # '[' -n 78863 ']' 00:18:08.595 12:02:14 -- nvmf/common.sh@478 -- # killprocess 78863 00:18:08.595 12:02:14 -- common/autotest_common.sh@936 -- # '[' -z 78863 ']' 00:18:08.595 12:02:14 -- common/autotest_common.sh@940 -- # kill -0 78863 00:18:08.595 12:02:14 -- common/autotest_common.sh@941 -- # uname 00:18:08.595 12:02:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.595 12:02:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78863 00:18:08.854 killing process with pid 78863 00:18:08.854 12:02:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.854 12:02:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.854 12:02:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78863' 00:18:08.854 12:02:14 -- common/autotest_common.sh@955 -- # kill 78863 00:18:08.854 12:02:14 -- common/autotest_common.sh@960 -- # wait 78863 00:18:09.433 12:02:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:09.433 12:02:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:09.433 12:02:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:09.433 12:02:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.434 12:02:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:09.434 12:02:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.434 
12:02:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.434 12:02:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.434 12:02:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:09.434 00:18:09.434 real 0m49.700s 00:18:09.434 user 2m46.102s 00:18:09.434 sys 0m31.561s 00:18:09.434 12:02:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:09.434 ************************************ 00:18:09.434 END TEST nvmf_multiconnection 00:18:09.434 12:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:09.434 ************************************ 00:18:09.434 12:02:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:09.434 12:02:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:09.434 12:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:09.434 ************************************ 00:18:09.434 START TEST nvmf_initiator_timeout 00:18:09.434 ************************************ 00:18:09.434 12:02:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:09.434 * Looking for test storage... 00:18:09.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:09.434 12:02:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:09.434 12:02:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:09.434 12:02:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:09.434 12:02:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:09.434 12:02:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:09.434 12:02:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:09.434 12:02:14 -- scripts/common.sh@335 -- # IFS=.-: 00:18:09.434 12:02:14 -- scripts/common.sh@335 -- # read -ra ver1 00:18:09.434 12:02:14 -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.434 12:02:14 -- scripts/common.sh@336 -- # read -ra ver2 00:18:09.434 12:02:14 -- scripts/common.sh@337 -- # local 'op=<' 00:18:09.434 12:02:14 -- scripts/common.sh@339 -- # ver1_l=2 00:18:09.434 12:02:14 -- scripts/common.sh@340 -- # ver2_l=1 00:18:09.434 12:02:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:09.434 12:02:14 -- scripts/common.sh@343 -- # case "$op" in 00:18:09.434 12:02:14 -- scripts/common.sh@344 -- # : 1 00:18:09.434 12:02:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:09.434 12:02:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.434 12:02:14 -- scripts/common.sh@364 -- # decimal 1 00:18:09.434 12:02:14 -- scripts/common.sh@352 -- # local d=1 00:18:09.434 12:02:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.434 12:02:14 -- scripts/common.sh@354 -- # echo 1 00:18:09.434 12:02:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:09.434 12:02:14 -- scripts/common.sh@365 -- # decimal 2 00:18:09.434 12:02:14 -- scripts/common.sh@352 -- # local d=2 00:18:09.434 12:02:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.434 12:02:14 -- scripts/common.sh@354 -- # echo 2 00:18:09.434 12:02:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:09.434 12:02:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:09.434 12:02:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:09.434 12:02:14 -- scripts/common.sh@367 -- # return 0 00:18:09.434 12:02:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:09.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.434 --rc genhtml_branch_coverage=1 00:18:09.434 --rc genhtml_function_coverage=1 00:18:09.434 --rc genhtml_legend=1 00:18:09.434 --rc geninfo_all_blocks=1 00:18:09.434 --rc geninfo_unexecuted_blocks=1 00:18:09.434 00:18:09.434 ' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:09.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.434 --rc genhtml_branch_coverage=1 00:18:09.434 --rc genhtml_function_coverage=1 00:18:09.434 --rc genhtml_legend=1 00:18:09.434 --rc geninfo_all_blocks=1 00:18:09.434 --rc geninfo_unexecuted_blocks=1 00:18:09.434 00:18:09.434 ' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:09.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.434 --rc genhtml_branch_coverage=1 00:18:09.434 --rc genhtml_function_coverage=1 00:18:09.434 --rc genhtml_legend=1 00:18:09.434 --rc geninfo_all_blocks=1 00:18:09.434 --rc geninfo_unexecuted_blocks=1 00:18:09.434 00:18:09.434 ' 00:18:09.434 12:02:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:09.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.434 --rc genhtml_branch_coverage=1 00:18:09.434 --rc genhtml_function_coverage=1 00:18:09.434 --rc genhtml_legend=1 00:18:09.434 --rc geninfo_all_blocks=1 00:18:09.434 --rc geninfo_unexecuted_blocks=1 00:18:09.434 00:18:09.434 ' 00:18:09.434 12:02:14 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.434 12:02:14 -- nvmf/common.sh@7 -- # uname -s 00:18:09.434 12:02:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.434 12:02:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.434 12:02:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.434 12:02:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.434 12:02:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.434 12:02:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.434 12:02:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.434 12:02:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.434 12:02:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.434 12:02:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.434 12:02:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 
00:18:09.434 12:02:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:18:09.434 12:02:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.434 12:02:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.434 12:02:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.434 12:02:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.721 12:02:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.721 12:02:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.721 12:02:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.721 12:02:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.721 12:02:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.721 12:02:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.721 12:02:14 -- paths/export.sh@5 -- # export PATH 00:18:09.721 12:02:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.721 12:02:14 -- nvmf/common.sh@46 -- # : 0 00:18:09.721 12:02:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.721 12:02:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.721 12:02:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.721 12:02:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.721 12:02:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.721 12:02:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:09.721 12:02:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.721 12:02:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.721 12:02:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.721 12:02:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.721 12:02:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:09.721 12:02:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.721 12:02:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.721 12:02:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.721 12:02:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.721 12:02:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.721 12:02:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.721 12:02:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.721 12:02:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.721 12:02:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:09.721 12:02:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:09.721 12:02:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:09.721 12:02:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:09.721 12:02:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:09.721 12:02:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:09.721 12:02:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.721 12:02:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.721 12:02:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.721 12:02:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:09.721 12:02:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.721 12:02:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.721 12:02:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.721 12:02:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.721 12:02:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.721 12:02:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.721 12:02:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.721 12:02:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.721 12:02:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:09.721 12:02:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:09.721 Cannot find device "nvmf_tgt_br" 00:18:09.721 12:02:14 -- nvmf/common.sh@154 -- # true 00:18:09.721 12:02:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.721 Cannot find device "nvmf_tgt_br2" 00:18:09.721 12:02:14 -- nvmf/common.sh@155 -- # true 00:18:09.721 12:02:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:09.721 12:02:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:09.721 Cannot find device "nvmf_tgt_br" 00:18:09.721 12:02:15 -- nvmf/common.sh@157 -- # true 00:18:09.721 12:02:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:09.721 Cannot find device "nvmf_tgt_br2" 00:18:09.721 12:02:15 -- nvmf/common.sh@158 -- # true 00:18:09.721 12:02:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:09.721 12:02:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:09.721 12:02:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:09.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.721 12:02:15 -- nvmf/common.sh@161 -- # true 00:18:09.721 12:02:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.721 12:02:15 -- nvmf/common.sh@162 -- # true 00:18:09.721 12:02:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.721 12:02:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.721 12:02:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.721 12:02:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.721 12:02:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.721 12:02:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.721 12:02:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.721 12:02:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.721 12:02:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.721 12:02:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:09.721 12:02:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:09.721 12:02:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:09.721 12:02:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:09.721 12:02:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.721 12:02:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.721 12:02:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.721 12:02:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:09.721 12:02:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:09.979 12:02:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.979 12:02:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.979 12:02:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.979 12:02:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.979 12:02:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.979 12:02:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:09.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:18:09.979 00:18:09.979 --- 10.0.0.2 ping statistics --- 00:18:09.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.979 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:09.979 12:02:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:09.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:09.979 00:18:09.979 --- 10.0.0.3 ping statistics --- 00:18:09.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.979 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:09.979 12:02:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:09.979 00:18:09.979 --- 10.0.0.1 ping statistics --- 00:18:09.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.979 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:09.979 12:02:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.979 12:02:15 -- nvmf/common.sh@421 -- # return 0 00:18:09.979 12:02:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:09.979 12:02:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.979 12:02:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:09.979 12:02:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:09.979 12:02:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.979 12:02:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:09.979 12:02:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:09.979 12:02:15 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:09.979 12:02:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.979 12:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.979 12:02:15 -- common/autotest_common.sh@10 -- # set +x 00:18:09.979 12:02:15 -- nvmf/common.sh@469 -- # nvmfpid=79935 00:18:09.979 12:02:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.979 12:02:15 -- nvmf/common.sh@470 -- # waitforlisten 79935 00:18:09.979 12:02:15 -- common/autotest_common.sh@829 -- # '[' -z 79935 ']' 00:18:09.979 12:02:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.979 12:02:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.979 12:02:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.979 12:02:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.979 12:02:15 -- common/autotest_common.sh@10 -- # set +x 00:18:09.979 [2024-11-29 12:02:15.373326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:09.979 [2024-11-29 12:02:15.373434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.237 [2024-11-29 12:02:15.515544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.237 [2024-11-29 12:02:15.611011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.237 [2024-11-29 12:02:15.611177] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.237 [2024-11-29 12:02:15.611192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.237 [2024-11-29 12:02:15.611202] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.237 [2024-11-29 12:02:15.611306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.237 [2024-11-29 12:02:15.612271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.237 [2024-11-29 12:02:15.612496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.237 [2024-11-29 12:02:15.612528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.172 12:02:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.172 12:02:16 -- common/autotest_common.sh@862 -- # return 0 00:18:11.172 12:02:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.172 12:02:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 12:02:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 Malloc0 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 Delay0 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 [2024-11-29 12:02:16.533602] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.172 12:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.172 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 [2024-11-29 12:02:16.562750] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.172 12:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.172 12:02:16 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae --hostid=79493c5c-f53c-4dad-804b-85e045bfadae -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.432 12:02:16 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.432 12:02:16 -- common/autotest_common.sh@1187 -- # local i=0 00:18:11.432 12:02:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.432 12:02:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:11.432 12:02:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:13.334 12:02:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:13.334 12:02:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:13.334 12:02:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.334 12:02:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:13.334 12:02:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.334 12:02:18 -- common/autotest_common.sh@1197 -- # return 0 00:18:13.334 12:02:18 -- target/initiator_timeout.sh@35 -- # fio_pid=80005 00:18:13.334 12:02:18 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:13.334 12:02:18 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:13.334 [global] 00:18:13.334 thread=1 00:18:13.334 invalidate=1 00:18:13.334 rw=write 00:18:13.334 time_based=1 00:18:13.334 runtime=60 00:18:13.334 ioengine=libaio 00:18:13.334 direct=1 00:18:13.334 bs=4096 00:18:13.334 iodepth=1 00:18:13.334 norandommap=0 00:18:13.334 numjobs=1 00:18:13.334 00:18:13.334 verify_dump=1 00:18:13.334 verify_backlog=512 00:18:13.334 verify_state_save=0 00:18:13.334 do_verify=1 00:18:13.334 verify=crc32c-intel 00:18:13.334 [job0] 00:18:13.334 filename=/dev/nvme0n1 00:18:13.334 Could not set queue depth (nvme0n1) 00:18:13.593 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.593 fio-3.35 00:18:13.593 Starting 1 thread 00:18:16.969 12:02:21 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:16.969 12:02:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.969 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.969 true 00:18:16.969 12:02:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.969 12:02:21 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:16.969 12:02:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.969 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.969 true 00:18:16.969 12:02:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.969 12:02:21 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:16.969 12:02:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.969 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.969 true 00:18:16.969 12:02:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.969 12:02:21 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:16.969 12:02:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.969 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.969 true 00:18:16.969 12:02:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.969 12:02:21 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:19.503 12:02:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.503 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.503 true 00:18:19.503 12:02:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:19.503 12:02:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.503 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.503 true 00:18:19.503 12:02:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:19.503 12:02:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.503 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.503 true 00:18:19.503 12:02:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:19.503 12:02:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.503 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.503 true 00:18:19.503 12:02:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:19.503 12:02:24 -- target/initiator_timeout.sh@54 -- # wait 80005 00:19:15.872 00:19:15.872 job0: (groupid=0, jobs=1): err= 0: pid=80026: Fri Nov 29 12:03:19 2024 00:19:15.872 read: IOPS=750, BW=3004KiB/s (3076kB/s)(176MiB/60000msec) 00:19:15.872 slat (usec): min=10, max=9637, avg=15.43, stdev=58.28 00:19:15.872 clat (usec): min=159, max=40489k, avg=1120.43, stdev=190746.56 00:19:15.872 lat (usec): min=171, max=40489k, avg=1135.86, stdev=190746.56 00:19:15.872 clat percentiles (usec): 00:19:15.872 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:19:15.872 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:19:15.872 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 269], 00:19:15.872 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 433], 99.95th=[ 685], 00:19:15.872 | 99.99th=[ 2245] 00:19:15.872 write: IOPS=752, BW=3009KiB/s (3081kB/s)(176MiB/60000msec); 0 zone resets 00:19:15.872 slat (usec): min=13, max=550, avg=22.46, stdev= 6.96 00:19:15.872 clat (usec): min=115, max=1661, avg=169.56, stdev=25.80 00:19:15.872 lat (usec): min=139, max=1681, avg=192.02, stdev=27.42 00:19:15.872 clat percentiles (usec): 00:19:15.872 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:19:15.872 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:19:15.872 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 208], 00:19:15.872 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 285], 99.95th=[ 437], 00:19:15.872 | 99.99th=[ 791] 00:19:15.872 bw ( KiB/s): min= 4096, max=11464, per=100.00%, avg=9037.08, stdev=1482.82, samples=39 00:19:15.872 iops : min= 1024, max= 2866, avg=2259.26, stdev=370.71, samples=39 00:19:15.872 lat (usec) : 250=92.72%, 500=7.22%, 750=0.04%, 1000=0.02% 00:19:15.872 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:19:15.872 cpu : usr=0.61%, sys=2.16%, ctx=90235, majf=0, minf=5 00:19:15.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:15.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.872 issued rwts: total=45056,45136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.872 00:19:15.872 Run status group 0 (all jobs): 00:19:15.872 READ: bw=3004KiB/s (3076kB/s), 3004KiB/s-3004KiB/s (3076kB/s-3076kB/s), io=176MiB (185MB), run=60000-60000msec 00:19:15.872 WRITE: bw=3009KiB/s (3081kB/s), 3009KiB/s-3009KiB/s (3081kB/s-3081kB/s), io=176MiB (185MB), run=60000-60000msec 00:19:15.872 00:19:15.872 Disk stats (read/write): 00:19:15.872 nvme0n1: ios=44935/45056, merge=0/0, ticks=10166/7918, in_queue=18084, util=99.79% 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.872 12:03:19 -- common/autotest_common.sh@1208 -- # local i=0 00:19:15.872 12:03:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:15.872 12:03:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.872 12:03:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:15.872 12:03:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.872 12:03:19 -- common/autotest_common.sh@1220 -- # return 0 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:15.872 nvmf hotplug test: fio successful as expected 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.872 12:03:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.872 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.872 12:03:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:15.872 12:03:19 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:15.872 12:03:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:15.872 12:03:19 -- nvmf/common.sh@116 -- # sync 00:19:15.872 12:03:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:15.872 12:03:19 -- nvmf/common.sh@119 -- # set +e 00:19:15.872 12:03:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:15.872 12:03:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:15.872 rmmod nvme_tcp 00:19:15.872 rmmod nvme_fabrics 00:19:15.872 rmmod nvme_keyring 00:19:15.872 12:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:15.872 12:03:19 -- nvmf/common.sh@123 -- # set -e 00:19:15.872 12:03:19 -- nvmf/common.sh@124 -- # return 0 00:19:15.872 12:03:19 -- nvmf/common.sh@477 -- # '[' -n 79935 ']' 00:19:15.873 12:03:19 -- nvmf/common.sh@478 -- # killprocess 79935 00:19:15.873 12:03:19 -- common/autotest_common.sh@936 -- # '[' -z 79935 ']' 00:19:15.873 12:03:19 -- common/autotest_common.sh@940 -- # kill -0 79935 00:19:15.873 12:03:19 -- common/autotest_common.sh@941 -- # uname 00:19:15.873 12:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:15.873 12:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79935 00:19:15.873 12:03:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:15.873 12:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:15.873 killing process with pid 79935 00:19:15.873 12:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79935' 00:19:15.873 12:03:19 -- common/autotest_common.sh@955 -- # kill 79935 00:19:15.873 12:03:19 -- common/autotest_common.sh@960 -- # wait 79935 00:19:15.873 12:03:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:15.873 12:03:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:15.873 12:03:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:15.873 12:03:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.873 12:03:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:15.873 12:03:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.873 12:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.873 12:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.873 12:03:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:15.873 00:19:15.873 real 1m4.836s 00:19:15.873 user 3m58.204s 00:19:15.873 sys 0m17.600s 00:19:15.873 12:03:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:15.873 ************************************ 00:19:15.873 END TEST nvmf_initiator_timeout 00:19:15.873 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.873 ************************************ 00:19:15.873 12:03:19 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:15.873 12:03:19 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:15.873 12:03:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.873 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.873 12:03:19 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:15.873 12:03:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.873 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.873 12:03:19 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:19:15.873 12:03:19 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:15.873 12:03:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.873 12:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.873 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:15.873 ************************************ 00:19:15.873 START TEST nvmf_identify 00:19:15.873 ************************************ 00:19:15.873 12:03:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:15.873 * Looking for test storage... 
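Before the identify trace below gets going, the tail of the initiator_timeout run that just finished reduces to a short sequence; a minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK test helpers and that the Delay0 delay bdev and the background fio job (pid 80005 in this run) were created earlier in the script; the fio_pid variable name is illustrative:

    # drop the injected delay back down to 30 us so the remaining fio I/O can complete
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30

    wait "$fio_pid"                                  # 80005 here; its result feeds the fio_status check
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach the initiator once fio reports success
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The remaining teardown traced above is the usual nvmftestfini path: unloading the nvme-tcp and nvme-fabrics modules (the rmmod lines are modprobe's verbose output), killing the nvmf_tgt pid (79935), and flushing the address on nvmf_init_if.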
00:19:15.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.873 12:03:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:15.873 12:03:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:15.873 12:03:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:15.873 12:03:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:15.873 12:03:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:15.873 12:03:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:15.873 12:03:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:15.873 12:03:19 -- scripts/common.sh@335 -- # IFS=.-: 00:19:15.873 12:03:19 -- scripts/common.sh@335 -- # read -ra ver1 00:19:15.873 12:03:19 -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.873 12:03:19 -- scripts/common.sh@336 -- # read -ra ver2 00:19:15.873 12:03:19 -- scripts/common.sh@337 -- # local 'op=<' 00:19:15.873 12:03:19 -- scripts/common.sh@339 -- # ver1_l=2 00:19:15.873 12:03:19 -- scripts/common.sh@340 -- # ver2_l=1 00:19:15.873 12:03:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:15.873 12:03:19 -- scripts/common.sh@343 -- # case "$op" in 00:19:15.873 12:03:19 -- scripts/common.sh@344 -- # : 1 00:19:15.873 12:03:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:15.873 12:03:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:15.873 12:03:19 -- scripts/common.sh@364 -- # decimal 1 00:19:15.873 12:03:19 -- scripts/common.sh@352 -- # local d=1 00:19:15.873 12:03:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.873 12:03:19 -- scripts/common.sh@354 -- # echo 1 00:19:15.873 12:03:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:15.873 12:03:19 -- scripts/common.sh@365 -- # decimal 2 00:19:15.873 12:03:19 -- scripts/common.sh@352 -- # local d=2 00:19:15.873 12:03:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.873 12:03:19 -- scripts/common.sh@354 -- # echo 2 00:19:15.873 12:03:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:15.873 12:03:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:15.873 12:03:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:15.873 12:03:19 -- scripts/common.sh@367 -- # return 0 00:19:15.873 12:03:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.873 12:03:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.873 --rc genhtml_branch_coverage=1 00:19:15.873 --rc genhtml_function_coverage=1 00:19:15.873 --rc genhtml_legend=1 00:19:15.873 --rc geninfo_all_blocks=1 00:19:15.873 --rc geninfo_unexecuted_blocks=1 00:19:15.873 00:19:15.873 ' 00:19:15.873 12:03:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.873 --rc genhtml_branch_coverage=1 00:19:15.873 --rc genhtml_function_coverage=1 00:19:15.873 --rc genhtml_legend=1 00:19:15.873 --rc geninfo_all_blocks=1 00:19:15.873 --rc geninfo_unexecuted_blocks=1 00:19:15.873 00:19:15.873 ' 00:19:15.873 12:03:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.873 --rc genhtml_branch_coverage=1 00:19:15.873 --rc genhtml_function_coverage=1 00:19:15.873 --rc genhtml_legend=1 00:19:15.873 --rc geninfo_all_blocks=1 00:19:15.873 --rc geninfo_unexecuted_blocks=1 00:19:15.873 00:19:15.873 ' 00:19:15.873 
12:03:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:15.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.873 --rc genhtml_branch_coverage=1 00:19:15.873 --rc genhtml_function_coverage=1 00:19:15.873 --rc genhtml_legend=1 00:19:15.873 --rc geninfo_all_blocks=1 00:19:15.873 --rc geninfo_unexecuted_blocks=1 00:19:15.873 00:19:15.873 ' 00:19:15.873 12:03:19 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.873 12:03:19 -- nvmf/common.sh@7 -- # uname -s 00:19:15.873 12:03:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.873 12:03:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.873 12:03:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.873 12:03:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.873 12:03:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.873 12:03:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.873 12:03:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.873 12:03:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.873 12:03:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.873 12:03:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.873 12:03:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:19:15.873 12:03:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:19:15.873 12:03:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.873 12:03:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.873 12:03:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.873 12:03:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.873 12:03:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.873 12:03:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.873 12:03:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.873 12:03:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.873 12:03:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.873 12:03:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.873 12:03:19 -- paths/export.sh@5 -- # export PATH 00:19:15.873 12:03:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.873 12:03:19 -- nvmf/common.sh@46 -- # : 0 00:19:15.873 12:03:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:15.873 12:03:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:15.873 12:03:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:15.873 12:03:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.873 12:03:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.873 12:03:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:15.873 12:03:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:15.873 12:03:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:15.873 12:03:19 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.873 12:03:19 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.874 12:03:19 -- host/identify.sh@14 -- # nvmftestinit 00:19:15.874 12:03:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:15.874 12:03:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.874 12:03:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:15.874 12:03:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:15.874 12:03:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:15.874 12:03:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.874 12:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.874 12:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.874 12:03:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:15.874 12:03:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:15.874 12:03:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:15.874 12:03:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:15.874 12:03:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:15.874 12:03:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:15.874 12:03:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.874 12:03:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.874 12:03:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:15.874 12:03:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:15.874 12:03:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.874 12:03:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.874 12:03:19 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.874 12:03:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.874 12:03:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.874 12:03:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.874 12:03:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.874 12:03:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.874 12:03:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:15.874 12:03:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:15.874 Cannot find device "nvmf_tgt_br" 00:19:15.874 12:03:19 -- nvmf/common.sh@154 -- # true 00:19:15.874 12:03:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.874 Cannot find device "nvmf_tgt_br2" 00:19:15.874 12:03:19 -- nvmf/common.sh@155 -- # true 00:19:15.874 12:03:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:15.874 12:03:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:15.874 Cannot find device "nvmf_tgt_br" 00:19:15.874 12:03:19 -- nvmf/common.sh@157 -- # true 00:19:15.874 12:03:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:15.874 Cannot find device "nvmf_tgt_br2" 00:19:15.874 12:03:19 -- nvmf/common.sh@158 -- # true 00:19:15.874 12:03:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:15.874 12:03:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:15.874 12:03:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.874 12:03:20 -- nvmf/common.sh@161 -- # true 00:19:15.874 12:03:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.874 12:03:20 -- nvmf/common.sh@162 -- # true 00:19:15.874 12:03:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.874 12:03:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.874 12:03:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.874 12:03:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.874 12:03:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.874 12:03:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.874 12:03:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.874 12:03:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:15.874 12:03:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:15.874 12:03:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:15.874 12:03:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:15.874 12:03:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:15.874 12:03:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:15.874 12:03:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.874 12:03:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.874 12:03:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:15.874 12:03:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:15.874 12:03:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:15.874 12:03:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.874 12:03:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.874 12:03:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.874 12:03:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.874 12:03:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.874 12:03:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:15.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:19:15.874 00:19:15.874 --- 10.0.0.2 ping statistics --- 00:19:15.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.874 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:15.874 12:03:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:15.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:15.874 00:19:15.874 --- 10.0.0.3 ping statistics --- 00:19:15.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.874 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:15.874 12:03:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:15.874 00:19:15.874 --- 10.0.0.1 ping statistics --- 00:19:15.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.874 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:15.874 12:03:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.874 12:03:20 -- nvmf/common.sh@421 -- # return 0 00:19:15.874 12:03:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:15.874 12:03:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.874 12:03:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:15.874 12:03:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:15.874 12:03:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.874 12:03:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:15.874 12:03:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:15.874 12:03:20 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:15.874 12:03:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.874 12:03:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.874 12:03:20 -- host/identify.sh@19 -- # nvmfpid=80872 00:19:15.874 12:03:20 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:15.874 12:03:20 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:15.874 12:03:20 -- host/identify.sh@23 -- # waitforlisten 80872 00:19:15.874 12:03:20 -- common/autotest_common.sh@829 -- # '[' -z 80872 ']' 00:19:15.874 12:03:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.874 12:03:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.874 12:03:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:15.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.874 12:03:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.874 12:03:20 -- common/autotest_common.sh@10 -- # set +x 00:19:15.874 [2024-11-29 12:03:20.305238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:15.874 [2024-11-29 12:03:20.305364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.874 [2024-11-29 12:03:20.447225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.874 [2024-11-29 12:03:20.577079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:15.874 [2024-11-29 12:03:20.577306] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.874 [2024-11-29 12:03:20.577323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.874 [2024-11-29 12:03:20.577334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.874 [2024-11-29 12:03:20.577474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.874 [2024-11-29 12:03:20.578038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.874 [2024-11-29 12:03:20.578158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.874 [2024-11-29 12:03:20.578164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.874 12:03:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.874 12:03:21 -- common/autotest_common.sh@862 -- # return 0 00:19:15.874 12:03:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:15.874 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.874 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:15.874 [2024-11-29 12:03:21.324616] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.874 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.874 12:03:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:15.874 12:03:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.874 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:15.874 12:03:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:15.874 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.874 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 Malloc0 00:19:16.133 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.133 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.133 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:16.133 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.133 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 12:03:21 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.133 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.133 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 [2024-11-29 12:03:21.434716] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.133 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:16.133 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.133 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:16.133 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.133 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 [2024-11-29 12:03:21.450419] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:16.133 [ 00:19:16.133 { 00:19:16.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:16.133 "subtype": "Discovery", 00:19:16.133 "listen_addresses": [ 00:19:16.133 { 00:19:16.133 "transport": "TCP", 00:19:16.133 "trtype": "TCP", 00:19:16.133 "adrfam": "IPv4", 00:19:16.133 "traddr": "10.0.0.2", 00:19:16.133 "trsvcid": "4420" 00:19:16.133 } 00:19:16.133 ], 00:19:16.133 "allow_any_host": true, 00:19:16.133 "hosts": [] 00:19:16.133 }, 00:19:16.133 { 00:19:16.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.133 "subtype": "NVMe", 00:19:16.133 "listen_addresses": [ 00:19:16.133 { 00:19:16.133 "transport": "TCP", 00:19:16.133 "trtype": "TCP", 00:19:16.133 "adrfam": "IPv4", 00:19:16.133 "traddr": "10.0.0.2", 00:19:16.133 "trsvcid": "4420" 00:19:16.133 } 00:19:16.133 ], 00:19:16.133 "allow_any_host": true, 00:19:16.133 "hosts": [], 00:19:16.133 "serial_number": "SPDK00000000000001", 00:19:16.133 "model_number": "SPDK bdev Controller", 00:19:16.133 "max_namespaces": 32, 00:19:16.133 "min_cntlid": 1, 00:19:16.133 "max_cntlid": 65519, 00:19:16.133 "namespaces": [ 00:19:16.133 { 00:19:16.133 "nsid": 1, 00:19:16.133 "bdev_name": "Malloc0", 00:19:16.133 "name": "Malloc0", 00:19:16.133 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:16.133 "eui64": "ABCDEF0123456789", 00:19:16.133 "uuid": "82c9f4ae-7567-43a9-a041-deb3ebe0d62b" 00:19:16.133 } 00:19:16.133 ] 00:19:16.133 } 00:19:16.133 ] 00:19:16.133 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.133 12:03:21 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:16.133 [2024-11-29 12:03:21.487639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
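The verbose identify pass that follows is easier to read with the target-side setup collapsed out of the rpc_cmd traces above; a minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier inside the nvmf_tgt_ns_spdk namespace (with -i 0 -e 0xFFFF -m 0xF):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport with the options the test passes
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host-side probe whose -L all debug output follows: identify the discovery controller over TCP
    build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

The nvmf_get_subsystems output above confirms the resulting state: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with Malloc0 as nsid 1, both listening on 10.0.0.2:4420.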
00:19:16.133 [2024-11-29 12:03:21.487689] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80907 ] 00:19:16.133 [2024-11-29 12:03:21.628958] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:16.133 [2024-11-29 12:03:21.629049] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:16.133 [2024-11-29 12:03:21.629057] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:16.133 [2024-11-29 12:03:21.629076] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:16.133 [2024-11-29 12:03:21.629091] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:19:16.133 [2024-11-29 12:03:21.629259] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:16.133 [2024-11-29 12:03:21.629320] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4ee540 0 00:19:16.133 [2024-11-29 12:03:21.636534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:16.133 [2024-11-29 12:03:21.636560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:16.133 [2024-11-29 12:03:21.636567] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:16.133 [2024-11-29 12:03:21.636571] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:16.133 [2024-11-29 12:03:21.636625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.133 [2024-11-29 12:03:21.636634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.133 [2024-11-29 12:03:21.636638] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.133 [2024-11-29 12:03:21.636656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:16.133 [2024-11-29 12:03:21.636687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.399 [2024-11-29 12:03:21.647548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.399 [2024-11-29 12:03:21.647570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.399 [2024-11-29 12:03:21.647575] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.399 [2024-11-29 12:03:21.647595] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:16.399 [2024-11-29 12:03:21.647605] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:16.399 [2024-11-29 12:03:21.647612] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:16.399 [2024-11-29 12:03:21.647631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647637] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647641] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.399 [2024-11-29 12:03:21.647652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.399 [2024-11-29 12:03:21.647682] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.399 [2024-11-29 12:03:21.647763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.399 [2024-11-29 12:03:21.647770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.399 [2024-11-29 12:03:21.647774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.399 [2024-11-29 12:03:21.647784] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:16.399 [2024-11-29 12:03:21.647793] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:16.399 [2024-11-29 12:03:21.647801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.399 [2024-11-29 12:03:21.647818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.399 [2024-11-29 12:03:21.647837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.399 [2024-11-29 12:03:21.647895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.399 [2024-11-29 12:03:21.647902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.399 [2024-11-29 12:03:21.647906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.399 [2024-11-29 12:03:21.647917] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:16.399 [2024-11-29 12:03:21.647926] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:16.399 [2024-11-29 12:03:21.647933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.647942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.399 [2024-11-29 12:03:21.647949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.399 [2024-11-29 12:03:21.647967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.399 [2024-11-29 12:03:21.648020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.399 [2024-11-29 12:03:21.648027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:16.399 [2024-11-29 12:03:21.648031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.648035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.399 [2024-11-29 12:03:21.648041] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:16.399 [2024-11-29 12:03:21.648052] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.648057] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.399 [2024-11-29 12:03:21.648061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.399 [2024-11-29 12:03:21.648068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.400 [2024-11-29 12:03:21.648085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.648138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.648145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.648149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.400 [2024-11-29 12:03:21.648159] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:16.400 [2024-11-29 12:03:21.648164] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:16.400 [2024-11-29 12:03:21.648173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:16.400 [2024-11-29 12:03:21.648279] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:16.400 [2024-11-29 12:03:21.648284] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:16.400 [2024-11-29 12:03:21.648294] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648299] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.648310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.400 [2024-11-29 12:03:21.648328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.648390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.648397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.648401] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648405] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.400 [2024-11-29 12:03:21.648410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:16.400 [2024-11-29 12:03:21.648420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.648437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.400 [2024-11-29 12:03:21.648453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.648536] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.648545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.648549] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.400 [2024-11-29 12:03:21.648559] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:16.400 [2024-11-29 12:03:21.648565] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.648573] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:16.400 [2024-11-29 12:03:21.648592] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.648605] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.648622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.400 [2024-11-29 12:03:21.648644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.648752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.400 [2024-11-29 12:03:21.648759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.400 [2024-11-29 12:03:21.648763] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648768] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ee540): datao=0, datal=4096, cccid=0 00:19:16.400 [2024-11-29 12:03:21.648773] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x527220) on tqpair(0x4ee540): expected_datao=0, payload_size=4096 00:19:16.400 [2024-11-29 12:03:21.648784] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648789] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.648805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.648809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.400 [2024-11-29 12:03:21.648823] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:16.400 [2024-11-29 12:03:21.648829] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:16.400 [2024-11-29 12:03:21.648834] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:16.400 [2024-11-29 12:03:21.648840] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:16.400 [2024-11-29 12:03:21.648845] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:16.400 [2024-11-29 12:03:21.648850] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.648866] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.648874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.648883] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.648891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.400 [2024-11-29 12:03:21.648911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.648986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.648993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.648997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527220) on tqpair=0x4ee540 00:19:16.400 [2024-11-29 12:03:21.649010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.649025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.400 [2024-11-29 12:03:21.649032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.649046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.400 [2024-11-29 12:03:21.649053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649057] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.649067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.400 [2024-11-29 12:03:21.649073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649081] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.649087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.400 [2024-11-29 12:03:21.649093] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.649106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:16.400 [2024-11-29 12:03:21.649115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ee540) 00:19:16.400 [2024-11-29 12:03:21.649130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.400 [2024-11-29 12:03:21.649149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527220, cid 0, qid 0 00:19:16.400 [2024-11-29 12:03:21.649157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527380, cid 1, qid 0 00:19:16.400 [2024-11-29 12:03:21.649162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5274e0, cid 2, qid 0 00:19:16.400 [2024-11-29 12:03:21.649167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.400 [2024-11-29 12:03:21.649172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5277a0, cid 4, qid 0 00:19:16.400 [2024-11-29 12:03:21.649284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.400 [2024-11-29 12:03:21.649291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.400 [2024-11-29 12:03:21.649295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.400 [2024-11-29 12:03:21.649299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5277a0) on tqpair=0x4ee540 00:19:16.400 
[2024-11-29 12:03:21.649305] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:16.400 [2024-11-29 12:03:21.649311] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:16.401 [2024-11-29 12:03:21.649322] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.649327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.649331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ee540) 00:19:16.401 [2024-11-29 12:03:21.649338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.401 [2024-11-29 12:03:21.652562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5277a0, cid 4, qid 0 00:19:16.401 [2024-11-29 12:03:21.652634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.401 [2024-11-29 12:03:21.652642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.401 [2024-11-29 12:03:21.652646] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652650] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ee540): datao=0, datal=4096, cccid=4 00:19:16.401 [2024-11-29 12:03:21.652655] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5277a0) on tqpair(0x4ee540): expected_datao=0, payload_size=4096 00:19:16.401 [2024-11-29 12:03:21.652664] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652669] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.401 [2024-11-29 12:03:21.652684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.401 [2024-11-29 12:03:21.652688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652692] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5277a0) on tqpair=0x4ee540 00:19:16.401 [2024-11-29 12:03:21.652710] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:16.401 [2024-11-29 12:03:21.652745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652752] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ee540) 00:19:16.401 [2024-11-29 12:03:21.652765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.401 [2024-11-29 12:03:21.652773] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4ee540) 00:19:16.401 [2024-11-29 12:03:21.652788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:16.401 [2024-11-29 12:03:21.652815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5277a0, cid 4, qid 0 00:19:16.401 [2024-11-29 12:03:21.652822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527900, cid 5, qid 0 00:19:16.401 [2024-11-29 12:03:21.652952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.401 [2024-11-29 12:03:21.652959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.401 [2024-11-29 12:03:21.652963] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652967] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ee540): datao=0, datal=1024, cccid=4 00:19:16.401 [2024-11-29 12:03:21.652972] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5277a0) on tqpair(0x4ee540): expected_datao=0, payload_size=1024 00:19:16.401 [2024-11-29 12:03:21.652981] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652985] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.652991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.401 [2024-11-29 12:03:21.652997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.401 [2024-11-29 12:03:21.653001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653005] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527900) on tqpair=0x4ee540 00:19:16.401 [2024-11-29 12:03:21.653023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.401 [2024-11-29 12:03:21.653031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.401 [2024-11-29 12:03:21.653035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5277a0) on tqpair=0x4ee540 00:19:16.401 [2024-11-29 12:03:21.653067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653078] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ee540) 00:19:16.401 [2024-11-29 12:03:21.653086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.401 [2024-11-29 12:03:21.653111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5277a0, cid 4, qid 0 00:19:16.401 [2024-11-29 12:03:21.653188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.401 [2024-11-29 12:03:21.653195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.401 [2024-11-29 12:03:21.653199] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653203] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ee540): datao=0, datal=3072, cccid=4 00:19:16.401 [2024-11-29 12:03:21.653208] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5277a0) on tqpair(0x4ee540): expected_datao=0, payload_size=3072 00:19:16.401 [2024-11-29 12:03:21.653216] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 
12:03:21.653221] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.401 [2024-11-29 12:03:21.653235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.401 [2024-11-29 12:03:21.653239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5277a0) on tqpair=0x4ee540 00:19:16.401 [2024-11-29 12:03:21.653253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4ee540) 00:19:16.401 [2024-11-29 12:03:21.653268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.401 [2024-11-29 12:03:21.653291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5277a0, cid 4, qid 0 00:19:16.401 [2024-11-29 12:03:21.653378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.401 [2024-11-29 12:03:21.653385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.401 [2024-11-29 12:03:21.653389] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653393] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4ee540): datao=0, datal=8, cccid=4 00:19:16.401 [2024-11-29 12:03:21.653398] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5277a0) on tqpair(0x4ee540): expected_datao=0, payload_size=8 00:19:16.401 [2024-11-29 12:03:21.653405] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653409] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.401 [2024-11-29 12:03:21.653431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.401 [2024-11-29 12:03:21.653435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.401 [2024-11-29 12:03:21.653440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5277a0) on tqpair=0x4ee540 00:19:16.401 ===================================================== 00:19:16.401 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:16.401 ===================================================== 00:19:16.401 Controller Capabilities/Features 00:19:16.401 ================================ 00:19:16.401 Vendor ID: 0000 00:19:16.401 Subsystem Vendor ID: 0000 00:19:16.401 Serial Number: .................... 00:19:16.401 Model Number: ........................................ 
00:19:16.401 Firmware Version: 24.01.1 00:19:16.401 Recommended Arb Burst: 0 00:19:16.401 IEEE OUI Identifier: 00 00 00 00:19:16.401 Multi-path I/O 00:19:16.401 May have multiple subsystem ports: No 00:19:16.401 May have multiple controllers: No 00:19:16.401 Associated with SR-IOV VF: No 00:19:16.401 Max Data Transfer Size: 131072 00:19:16.401 Max Number of Namespaces: 0 00:19:16.401 Max Number of I/O Queues: 1024 00:19:16.401 NVMe Specification Version (VS): 1.3 00:19:16.401 NVMe Specification Version (Identify): 1.3 00:19:16.401 Maximum Queue Entries: 128 00:19:16.401 Contiguous Queues Required: Yes 00:19:16.401 Arbitration Mechanisms Supported 00:19:16.401 Weighted Round Robin: Not Supported 00:19:16.401 Vendor Specific: Not Supported 00:19:16.401 Reset Timeout: 15000 ms 00:19:16.401 Doorbell Stride: 4 bytes 00:19:16.401 NVM Subsystem Reset: Not Supported 00:19:16.401 Command Sets Supported 00:19:16.401 NVM Command Set: Supported 00:19:16.401 Boot Partition: Not Supported 00:19:16.401 Memory Page Size Minimum: 4096 bytes 00:19:16.401 Memory Page Size Maximum: 4096 bytes 00:19:16.401 Persistent Memory Region: Not Supported 00:19:16.401 Optional Asynchronous Events Supported 00:19:16.401 Namespace Attribute Notices: Not Supported 00:19:16.401 Firmware Activation Notices: Not Supported 00:19:16.401 ANA Change Notices: Not Supported 00:19:16.401 PLE Aggregate Log Change Notices: Not Supported 00:19:16.401 LBA Status Info Alert Notices: Not Supported 00:19:16.401 EGE Aggregate Log Change Notices: Not Supported 00:19:16.401 Normal NVM Subsystem Shutdown event: Not Supported 00:19:16.401 Zone Descriptor Change Notices: Not Supported 00:19:16.401 Discovery Log Change Notices: Supported 00:19:16.401 Controller Attributes 00:19:16.401 128-bit Host Identifier: Not Supported 00:19:16.401 Non-Operational Permissive Mode: Not Supported 00:19:16.401 NVM Sets: Not Supported 00:19:16.401 Read Recovery Levels: Not Supported 00:19:16.401 Endurance Groups: Not Supported 00:19:16.401 Predictable Latency Mode: Not Supported 00:19:16.401 Traffic Based Keep ALive: Not Supported 00:19:16.401 Namespace Granularity: Not Supported 00:19:16.401 SQ Associations: Not Supported 00:19:16.402 UUID List: Not Supported 00:19:16.402 Multi-Domain Subsystem: Not Supported 00:19:16.402 Fixed Capacity Management: Not Supported 00:19:16.402 Variable Capacity Management: Not Supported 00:19:16.402 Delete Endurance Group: Not Supported 00:19:16.402 Delete NVM Set: Not Supported 00:19:16.402 Extended LBA Formats Supported: Not Supported 00:19:16.402 Flexible Data Placement Supported: Not Supported 00:19:16.402 00:19:16.402 Controller Memory Buffer Support 00:19:16.402 ================================ 00:19:16.402 Supported: No 00:19:16.402 00:19:16.402 Persistent Memory Region Support 00:19:16.402 ================================ 00:19:16.402 Supported: No 00:19:16.402 00:19:16.402 Admin Command Set Attributes 00:19:16.402 ============================ 00:19:16.402 Security Send/Receive: Not Supported 00:19:16.402 Format NVM: Not Supported 00:19:16.402 Firmware Activate/Download: Not Supported 00:19:16.402 Namespace Management: Not Supported 00:19:16.402 Device Self-Test: Not Supported 00:19:16.402 Directives: Not Supported 00:19:16.402 NVMe-MI: Not Supported 00:19:16.402 Virtualization Management: Not Supported 00:19:16.402 Doorbell Buffer Config: Not Supported 00:19:16.402 Get LBA Status Capability: Not Supported 00:19:16.402 Command & Feature Lockdown Capability: Not Supported 00:19:16.402 Abort Command Limit: 1 00:19:16.402 
Async Event Request Limit: 4 00:19:16.402 Number of Firmware Slots: N/A 00:19:16.402 Firmware Slot 1 Read-Only: N/A 00:19:16.402 Firmware Activation Without Reset: N/A 00:19:16.402 Multiple Update Detection Support: N/A 00:19:16.402 Firmware Update Granularity: No Information Provided 00:19:16.402 Per-Namespace SMART Log: No 00:19:16.402 Asymmetric Namespace Access Log Page: Not Supported 00:19:16.402 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:16.402 Command Effects Log Page: Not Supported 00:19:16.402 Get Log Page Extended Data: Supported 00:19:16.402 Telemetry Log Pages: Not Supported 00:19:16.402 Persistent Event Log Pages: Not Supported 00:19:16.402 Supported Log Pages Log Page: May Support 00:19:16.402 Commands Supported & Effects Log Page: Not Supported 00:19:16.402 Feature Identifiers & Effects Log Page:May Support 00:19:16.402 NVMe-MI Commands & Effects Log Page: May Support 00:19:16.402 Data Area 4 for Telemetry Log: Not Supported 00:19:16.402 Error Log Page Entries Supported: 128 00:19:16.402 Keep Alive: Not Supported 00:19:16.402 00:19:16.402 NVM Command Set Attributes 00:19:16.402 ========================== 00:19:16.402 Submission Queue Entry Size 00:19:16.402 Max: 1 00:19:16.402 Min: 1 00:19:16.402 Completion Queue Entry Size 00:19:16.402 Max: 1 00:19:16.402 Min: 1 00:19:16.402 Number of Namespaces: 0 00:19:16.402 Compare Command: Not Supported 00:19:16.402 Write Uncorrectable Command: Not Supported 00:19:16.402 Dataset Management Command: Not Supported 00:19:16.402 Write Zeroes Command: Not Supported 00:19:16.402 Set Features Save Field: Not Supported 00:19:16.402 Reservations: Not Supported 00:19:16.402 Timestamp: Not Supported 00:19:16.402 Copy: Not Supported 00:19:16.402 Volatile Write Cache: Not Present 00:19:16.402 Atomic Write Unit (Normal): 1 00:19:16.402 Atomic Write Unit (PFail): 1 00:19:16.402 Atomic Compare & Write Unit: 1 00:19:16.402 Fused Compare & Write: Supported 00:19:16.402 Scatter-Gather List 00:19:16.402 SGL Command Set: Supported 00:19:16.402 SGL Keyed: Supported 00:19:16.402 SGL Bit Bucket Descriptor: Not Supported 00:19:16.402 SGL Metadata Pointer: Not Supported 00:19:16.402 Oversized SGL: Not Supported 00:19:16.402 SGL Metadata Address: Not Supported 00:19:16.402 SGL Offset: Supported 00:19:16.402 Transport SGL Data Block: Not Supported 00:19:16.402 Replay Protected Memory Block: Not Supported 00:19:16.402 00:19:16.402 Firmware Slot Information 00:19:16.402 ========================= 00:19:16.402 Active slot: 0 00:19:16.402 00:19:16.402 00:19:16.402 Error Log 00:19:16.402 ========= 00:19:16.402 00:19:16.402 Active Namespaces 00:19:16.402 ================= 00:19:16.402 Discovery Log Page 00:19:16.402 ================== 00:19:16.402 Generation Counter: 2 00:19:16.402 Number of Records: 2 00:19:16.402 Record Format: 0 00:19:16.402 00:19:16.402 Discovery Log Entry 0 00:19:16.402 ---------------------- 00:19:16.402 Transport Type: 3 (TCP) 00:19:16.402 Address Family: 1 (IPv4) 00:19:16.402 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:16.402 Entry Flags: 00:19:16.402 Duplicate Returned Information: 1 00:19:16.402 Explicit Persistent Connection Support for Discovery: 1 00:19:16.402 Transport Requirements: 00:19:16.402 Secure Channel: Not Required 00:19:16.402 Port ID: 0 (0x0000) 00:19:16.402 Controller ID: 65535 (0xffff) 00:19:16.402 Admin Max SQ Size: 128 00:19:16.402 Transport Service Identifier: 4420 00:19:16.402 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:16.402 Transport Address: 10.0.0.2 00:19:16.402 
Discovery Log Entry 1 00:19:16.402 ---------------------- 00:19:16.402 Transport Type: 3 (TCP) 00:19:16.402 Address Family: 1 (IPv4) 00:19:16.402 Subsystem Type: 2 (NVM Subsystem) 00:19:16.402 Entry Flags: 00:19:16.402 Duplicate Returned Information: 0 00:19:16.402 Explicit Persistent Connection Support for Discovery: 0 00:19:16.402 Transport Requirements: 00:19:16.402 Secure Channel: Not Required 00:19:16.402 Port ID: 0 (0x0000) 00:19:16.402 Controller ID: 65535 (0xffff) 00:19:16.402 Admin Max SQ Size: 128 00:19:16.402 Transport Service Identifier: 4420 00:19:16.402 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:16.402 Transport Address: 10.0.0.2 [2024-11-29 12:03:21.653561] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:16.402 [2024-11-29 12:03:21.653580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.402 [2024-11-29 12:03:21.653588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.402 [2024-11-29 12:03:21.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.402 [2024-11-29 12:03:21.653602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.402 [2024-11-29 12:03:21.653612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.402 [2024-11-29 12:03:21.653628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.402 [2024-11-29 12:03:21.653653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.402 [2024-11-29 12:03:21.653722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.402 [2024-11-29 12:03:21.653729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.402 [2024-11-29 12:03:21.653733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.402 [2024-11-29 12:03:21.653745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.402 [2024-11-29 12:03:21.653762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.402 [2024-11-29 12:03:21.653783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.402 [2024-11-29 12:03:21.653868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.402 [2024-11-29 12:03:21.653875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.402 [2024-11-29 12:03:21.653879] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.402 [2024-11-29 12:03:21.653889] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:16.402 [2024-11-29 12:03:21.653894] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:16.402 [2024-11-29 12:03:21.653905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653909] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.402 [2024-11-29 12:03:21.653913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.402 [2024-11-29 12:03:21.653921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.402 [2024-11-29 12:03:21.653937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.402 [2024-11-29 12:03:21.654000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.402 [2024-11-29 12:03:21.654006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654243] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 
12:03:21.654249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654257] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654380] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654630] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 
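The GET LOG PAGE notices earlier in this trace (cdw10 values 00ff0070, 02ff0070 and 00010070) are the identify tool reading the discovery log page, log identifier 0x70. A minimal decoding sketch follows, assuming the standard NVMe Get Log Page CDW10 layout (bits 7:0 = log page identifier, bits 31:16 = number of dwords, 0's based); it is illustrative only and not part of the test output, but the sizes it reports line up with the c2h_data payload lengths of 1024, 3072 and 8 bytes logged above.

# Illustrative decoder for the Get Log Page CDW10 words printed in the trace above.
def decode_get_log_page_cdw10(cdw10: int) -> dict:
    lid = cdw10 & 0xFF                 # bits 7:0   - log page identifier
    numdl = (cdw10 >> 16) & 0xFFFF     # bits 31:16 - number of dwords, 0's based
    return {"lid": hex(lid), "bytes": (numdl + 1) * 4}

for word in (0x00FF0070, 0x02FF0070, 0x00010070):
    print(f"cdw10={word:#010x} -> {decode_get_log_page_cdw10(word)}")
# lid 0x70 (discovery log page) with 1024, 3072 and 8 bytes respectively.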
[2024-11-29 12:03:21.654638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654761] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654765] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.654871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.654878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.654882] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.654897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.654911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.654922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.654951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.655007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.655014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.655018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.655033] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.655049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.655067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.655124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.655137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.655142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.655158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.655174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.655192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.655251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.655258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.655262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.655276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655281] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.655292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.403 [2024-11-29 12:03:21.655308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.403 [2024-11-29 12:03:21.655367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.403 [2024-11-29 12:03:21.655378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.403 [2024-11-29 12:03:21.655382] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.403 [2024-11-29 12:03:21.655398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.403 [2024-11-29 12:03:21.655407] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.403 [2024-11-29 12:03:21.655414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.655431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.655521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.655531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.655535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.655550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.655567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.655587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.655652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.655659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.655663] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.655678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.655694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.655710] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.655766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.655773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.655777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.655791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.655808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.655824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.655883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.655890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.655893] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655898] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.655908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.655917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.655924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.655940] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.656010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656014] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656025] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.656139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 
00:19:16.404 [2024-11-29 12:03:21.656254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.656264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656311] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.656384] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656430] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.404 [2024-11-29 12:03:21.656500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656531] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656535] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:16.404 [2024-11-29 12:03:21.656628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.404 [2024-11-29 12:03:21.656643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.404 [2024-11-29 12:03:21.656652] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.404 [2024-11-29 12:03:21.656659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.404 [2024-11-29 12:03:21.656675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.404 [2024-11-29 12:03:21.656738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.404 [2024-11-29 12:03:21.656746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.656750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.656765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656770] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656774] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.656781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.656798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.656857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.656863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.656867] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.656882] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.656898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.656914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.656973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.656980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.656984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.656988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.656999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657150] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657224] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657360] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657661] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657734] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 
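The long run of FABRIC PROPERTY GET entries around this point is the host waiting for the controller to finish shutting down: nvme_ctrlr_shutdown_set_cc_done above set a 10000 ms budget, and nvme_ctrlr_shutdown_poll_async below reports completion after 8 milliseconds. A rough sketch of that bounded poll pattern, using hypothetical helper names rather than the real SPDK API, and assuming the property being read is the CSTS register (whose SHST field signals shutdown complete):

import time

SHUTDOWN_TIMEOUT_MS = 10_000   # budget logged by nvme_ctrlr_shutdown_set_cc_done
CSTS_SHST_COMPLETE = 0x2       # shutdown processing complete

def wait_for_shutdown(read_csts) -> bool:
    # read_csts is a hypothetical callable standing in for one Fabrics
    # Property Get of CSTS, i.e. one of the PROPERTY GET lines above.
    deadline = time.monotonic() + SHUTDOWN_TIMEOUT_MS / 1000
    while time.monotonic() < deadline:
        if (read_csts() >> 2) & 0x3 == CSTS_SHST_COMPLETE:  # CSTS bits 3:2 = SHST
            return True
        time.sleep(0.001)  # re-poll, as the repeated GETs above do
    return False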
00:19:16.405 [2024-11-29 12:03:21.657769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.657898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.657957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.657963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.657967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657971] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.657982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.657991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.657998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 12:03:21.658014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.658076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.658083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.405 [2024-11-29 12:03:21.658087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.658091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.405 [2024-11-29 12:03:21.658101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.658106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.405 [2024-11-29 12:03:21.658110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.405 [2024-11-29 12:03:21.658117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.405 [2024-11-29 
12:03:21.658133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.405 [2024-11-29 12:03:21.658190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.405 [2024-11-29 12:03:21.658197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.658201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.406 [2024-11-29 12:03:21.658215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.406 [2024-11-29 12:03:21.658231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.658247] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.406 [2024-11-29 12:03:21.658312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.658319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.658323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.406 [2024-11-29 12:03:21.658338] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658346] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.406 [2024-11-29 12:03:21.658354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.658370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.406 [2024-11-29 12:03:21.658435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.658442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.658446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.406 [2024-11-29 12:03:21.658460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.658469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.406 [2024-11-29 12:03:21.658476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.658499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.406 [2024-11-29 12:03:21.662528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:19:16.406 [2024-11-29 12:03:21.662545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.662550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.662555] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.406 [2024-11-29 12:03:21.662570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.662575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.662579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4ee540) 00:19:16.406 [2024-11-29 12:03:21.662589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.662616] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x527640, cid 3, qid 0 00:19:16.406 [2024-11-29 12:03:21.662683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.662690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.662694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.662698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x527640) on tqpair=0x4ee540 00:19:16.406 [2024-11-29 12:03:21.662707] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:19:16.406 00:19:16.406 12:03:21 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:16.406 [2024-11-29 12:03:21.705306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
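The spdk_nvme_identify invocation above addresses the target with a single transport-ID string, 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the -L all option enables SPDK's debug log flags, which is why the *DEBUG* traces around it are so verbose. A self-contained sketch of splitting that key:value format apart (illustrative only, not the SPDK parser):

def parse_trid(trid: str) -> dict:
    # Tokens are whitespace-separated key:value pairs; the subnqn value
    # itself contains colons, so split each token on the first ':' only.
    return dict(token.split(":", 1) for token in trid.split())

print(parse_trid("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                 "subnqn:nqn.2016-06.io.spdk:cnode1"))
# {'trtype': 'tcp', 'adrfam': 'IPv4', 'traddr': '10.0.0.2',
#  'trsvcid': '4420', 'subnqn': 'nqn.2016-06.io.spdk:cnode1'}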
00:19:16.406 [2024-11-29 12:03:21.705365] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80913 ] 00:19:16.406 [2024-11-29 12:03:21.846600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:16.406 [2024-11-29 12:03:21.846675] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:16.406 [2024-11-29 12:03:21.846683] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:16.406 [2024-11-29 12:03:21.846699] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:16.406 [2024-11-29 12:03:21.846715] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:19:16.406 [2024-11-29 12:03:21.846895] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:16.406 [2024-11-29 12:03:21.846976] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2079540 0 00:19:16.406 [2024-11-29 12:03:21.854535] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:16.406 [2024-11-29 12:03:21.854561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:16.406 [2024-11-29 12:03:21.854567] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:16.406 [2024-11-29 12:03:21.854571] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:16.406 [2024-11-29 12:03:21.854626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.854634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.854639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.406 [2024-11-29 12:03:21.854656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:16.406 [2024-11-29 12:03:21.854690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.406 [2024-11-29 12:03:21.862533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.862556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.862562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.406 [2024-11-29 12:03:21.862583] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:16.406 [2024-11-29 12:03:21.862592] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:16.406 [2024-11-29 12:03:21.862599] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:16.406 [2024-11-29 12:03:21.862617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862627] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.406 [2024-11-29 12:03:21.862637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.862667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.406 [2024-11-29 12:03:21.862758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.862766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.862770] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.406 [2024-11-29 12:03:21.862781] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:16.406 [2024-11-29 12:03:21.862790] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:16.406 [2024-11-29 12:03:21.862799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.406 [2024-11-29 12:03:21.862816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.862836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.406 [2024-11-29 12:03:21.862900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.862912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 12:03:21.862919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.406 [2024-11-29 12:03:21.862936] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:16.406 [2024-11-29 12:03:21.862952] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:16.406 [2024-11-29 12:03:21.862963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.862972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.406 [2024-11-29 12:03:21.862980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.406 [2024-11-29 12:03:21.863003] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.406 [2024-11-29 12:03:21.863063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.406 [2024-11-29 12:03:21.863070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.406 [2024-11-29 
12:03:21.863074] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.863079] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.406 [2024-11-29 12:03:21.863086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:16.406 [2024-11-29 12:03:21.863097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.863103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.406 [2024-11-29 12:03:21.863107] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.406 [2024-11-29 12:03:21.863114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.407 [2024-11-29 12:03:21.863134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.863204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.863211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.863215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.407 [2024-11-29 12:03:21.863226] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:16.407 [2024-11-29 12:03:21.863232] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:16.407 [2024-11-29 12:03:21.863241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:16.407 [2024-11-29 12:03:21.863347] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:16.407 [2024-11-29 12:03:21.863361] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:16.407 [2024-11-29 12:03:21.863373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.863390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.407 [2024-11-29 12:03:21.863410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.863487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.863496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.863500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.407 
[2024-11-29 12:03:21.863532] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:16.407 [2024-11-29 12:03:21.863546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.863563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.407 [2024-11-29 12:03:21.863591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.863661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.863668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.863672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.407 [2024-11-29 12:03:21.863683] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:16.407 [2024-11-29 12:03:21.863689] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.863697] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:16.407 [2024-11-29 12:03:21.863717] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.863728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.863745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.407 [2024-11-29 12:03:21.863765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.863891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.407 [2024-11-29 12:03:21.863904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.407 [2024-11-29 12:03:21.863908] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863913] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=4096, cccid=0 00:19:16.407 [2024-11-29 12:03:21.863918] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b2220) on tqpair(0x2079540): expected_datao=0, payload_size=4096 00:19:16.407 [2024-11-29 12:03:21.863928] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863933] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.863948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.863952] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.863956] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.407 [2024-11-29 12:03:21.863967] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:16.407 [2024-11-29 12:03:21.863973] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:16.407 [2024-11-29 12:03:21.863978] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:16.407 [2024-11-29 12:03:21.863983] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:16.407 [2024-11-29 12:03:21.863988] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:16.407 [2024-11-29 12:03:21.863993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.864008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.864018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864026] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.407 [2024-11-29 12:03:21.864056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.864122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.864129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.864133] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2220) on tqpair=0x2079540 00:19:16.407 [2024-11-29 12:03:21.864147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.407 [2024-11-29 12:03:21.864170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.407 [2024-11-29 12:03:21.864190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.407 [2024-11-29 12:03:21.864210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.407 [2024-11-29 12:03:21.864230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.864245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.864253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.407 [2024-11-29 12:03:21.864268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.407 [2024-11-29 12:03:21.864290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2220, cid 0, qid 0 00:19:16.407 [2024-11-29 12:03:21.864298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2380, cid 1, qid 0 00:19:16.407 [2024-11-29 12:03:21.864303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b24e0, cid 2, qid 0 00:19:16.407 [2024-11-29 12:03:21.864308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 00:19:16.407 [2024-11-29 12:03:21.864313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.407 [2024-11-29 12:03:21.864420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.407 [2024-11-29 12:03:21.864427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.407 [2024-11-29 12:03:21.864431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.407 [2024-11-29 12:03:21.864435] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.407 [2024-11-29 12:03:21.864442] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:16.407 [2024-11-29 12:03:21.864448] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:16.407 [2024-11-29 12:03:21.864457] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864470] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.864494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:16.408 [2024-11-29 12:03:21.864527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.408 [2024-11-29 12:03:21.864603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.864611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.864615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864619] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.408 [2024-11-29 12:03:21.864683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864704] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864712] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.864720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.408 [2024-11-29 12:03:21.864740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.408 [2024-11-29 12:03:21.864817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.408 [2024-11-29 12:03:21.864824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.408 [2024-11-29 12:03:21.864828] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864832] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=4096, cccid=4 00:19:16.408 [2024-11-29 12:03:21.864837] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b27a0) on tqpair(0x2079540): expected_datao=0, payload_size=4096 00:19:16.408 [2024-11-29 12:03:21.864846] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864850] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:19:16.408 [2024-11-29 12:03:21.864859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.864865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.864869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.408 [2024-11-29 12:03:21.864891] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:16.408 [2024-11-29 12:03:21.864904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864916] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.864924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.864932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.864940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.408 [2024-11-29 12:03:21.864960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.408 [2024-11-29 12:03:21.865053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.408 [2024-11-29 12:03:21.865060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.408 [2024-11-29 12:03:21.865064] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865068] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=4096, cccid=4 00:19:16.408 [2024-11-29 12:03:21.865073] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b27a0) on tqpair(0x2079540): expected_datao=0, payload_size=4096 00:19:16.408 [2024-11-29 12:03:21.865081] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865085] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.865101] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.865104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865109] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.408 [2024-11-29 12:03:21.865127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865152] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.865164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.408 [2024-11-29 12:03:21.865184] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.408 [2024-11-29 12:03:21.865267] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.408 [2024-11-29 12:03:21.865276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.408 [2024-11-29 12:03:21.865280] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865284] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=4096, cccid=4 00:19:16.408 [2024-11-29 12:03:21.865289] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b27a0) on tqpair(0x2079540): expected_datao=0, payload_size=4096 00:19:16.408 [2024-11-29 12:03:21.865297] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865302] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.865318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.865322] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.408 [2024-11-29 12:03:21.865336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865357] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865365] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865371] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865377] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:16.408 [2024-11-29 12:03:21.865382] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:16.408 [2024-11-29 12:03:21.865388] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:16.408 [2024-11-29 12:03:21.865407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865416] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.865424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.408 [2024-11-29 12:03:21.865431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2079540) 00:19:16.408 [2024-11-29 12:03:21.865446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.408 [2024-11-29 12:03:21.865472] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.408 [2024-11-29 12:03:21.865480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2900, cid 5, qid 0 00:19:16.408 [2024-11-29 12:03:21.865585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.865594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.865598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.408 [2024-11-29 12:03:21.865602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.408 [2024-11-29 12:03:21.865610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.408 [2024-11-29 12:03:21.865617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.408 [2024-11-29 12:03:21.865621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865625] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2900) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.865637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.865653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.865674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2900, cid 5, qid 0 00:19:16.409 [2024-11-29 12:03:21.865739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.865746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.865750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2900) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.865767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.865783] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.865801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2900, cid 5, qid 0 00:19:16.409 [2024-11-29 12:03:21.865862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.865869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.865873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2900) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.865889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.865905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.865923] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2900, cid 5, qid 0 00:19:16.409 [2024-11-29 12:03:21.865979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.865986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.865990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.865994] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2900) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.866009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.866026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.866034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.866050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.866057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866065] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.866072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.866081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2079540) 00:19:16.409 [2024-11-29 12:03:21.866096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.409 [2024-11-29 12:03:21.866117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2900, cid 5, qid 0 00:19:16.409 [2024-11-29 12:03:21.866124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b27a0, cid 4, qid 0 00:19:16.409 [2024-11-29 12:03:21.866129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2a60, cid 6, qid 0 00:19:16.409 [2024-11-29 12:03:21.866134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2bc0, cid 7, qid 0 00:19:16.409 [2024-11-29 12:03:21.866283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.409 [2024-11-29 12:03:21.866291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.409 [2024-11-29 12:03:21.866295] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866299] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=8192, cccid=5 00:19:16.409 [2024-11-29 12:03:21.866304] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b2900) on tqpair(0x2079540): expected_datao=0, payload_size=8192 00:19:16.409 [2024-11-29 12:03:21.866322] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866327] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.409 [2024-11-29 12:03:21.866339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.409 [2024-11-29 12:03:21.866343] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866347] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=512, cccid=4 00:19:16.409 [2024-11-29 12:03:21.866352] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b27a0) on tqpair(0x2079540): expected_datao=0, payload_size=512 00:19:16.409 [2024-11-29 12:03:21.866359] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866363] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.409 [2024-11-29 12:03:21.866375] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.409 [2024-11-29 12:03:21.866379] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866383] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=512, cccid=6 00:19:16.409 [2024-11-29 12:03:21.866387] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b2a60) on tqpair(0x2079540): expected_datao=0, payload_size=512 00:19:16.409 [2024-11-29 12:03:21.866395] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866399] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:16.409 [2024-11-29 12:03:21.866411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:16.409 [2024-11-29 12:03:21.866414] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866418] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2079540): datao=0, datal=4096, cccid=7 00:19:16.409 [2024-11-29 12:03:21.866423] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b2bc0) on tqpair(0x2079540): expected_datao=0, payload_size=4096 00:19:16.409 [2024-11-29 12:03:21.866430] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866434] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.866449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.866453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2900) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.866479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.866487] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.866490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.866495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b27a0) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.870521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.870542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.870547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.870552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2a60) on tqpair=0x2079540 00:19:16.409 [2024-11-29 12:03:21.870562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.409 [2024-11-29 12:03:21.870569] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.409 [2024-11-29 12:03:21.870573] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.409 [2024-11-29 12:03:21.870577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2bc0) on tqpair=0x2079540 00:19:16.409 ===================================================== 00:19:16.409 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.409 ===================================================== 00:19:16.409 Controller Capabilities/Features 00:19:16.409 ================================ 00:19:16.409 Vendor ID: 8086 00:19:16.409 Subsystem Vendor ID: 8086 00:19:16.409 Serial Number: SPDK00000000000001 00:19:16.409 Model Number: SPDK bdev Controller 00:19:16.409 Firmware Version: 24.01.1 00:19:16.409 Recommended Arb Burst: 6 00:19:16.409 IEEE OUI Identifier: e4 d2 5c 00:19:16.409 Multi-path I/O 00:19:16.409 May have multiple subsystem 
ports: Yes 00:19:16.409 May have multiple controllers: Yes 00:19:16.409 Associated with SR-IOV VF: No 00:19:16.409 Max Data Transfer Size: 131072 00:19:16.409 Max Number of Namespaces: 32 00:19:16.409 Max Number of I/O Queues: 127 00:19:16.409 NVMe Specification Version (VS): 1.3 00:19:16.409 NVMe Specification Version (Identify): 1.3 00:19:16.409 Maximum Queue Entries: 128 00:19:16.409 Contiguous Queues Required: Yes 00:19:16.409 Arbitration Mechanisms Supported 00:19:16.409 Weighted Round Robin: Not Supported 00:19:16.410 Vendor Specific: Not Supported 00:19:16.410 Reset Timeout: 15000 ms 00:19:16.410 Doorbell Stride: 4 bytes 00:19:16.410 NVM Subsystem Reset: Not Supported 00:19:16.410 Command Sets Supported 00:19:16.410 NVM Command Set: Supported 00:19:16.410 Boot Partition: Not Supported 00:19:16.410 Memory Page Size Minimum: 4096 bytes 00:19:16.410 Memory Page Size Maximum: 4096 bytes 00:19:16.410 Persistent Memory Region: Not Supported 00:19:16.410 Optional Asynchronous Events Supported 00:19:16.410 Namespace Attribute Notices: Supported 00:19:16.410 Firmware Activation Notices: Not Supported 00:19:16.410 ANA Change Notices: Not Supported 00:19:16.410 PLE Aggregate Log Change Notices: Not Supported 00:19:16.410 LBA Status Info Alert Notices: Not Supported 00:19:16.410 EGE Aggregate Log Change Notices: Not Supported 00:19:16.410 Normal NVM Subsystem Shutdown event: Not Supported 00:19:16.410 Zone Descriptor Change Notices: Not Supported 00:19:16.410 Discovery Log Change Notices: Not Supported 00:19:16.410 Controller Attributes 00:19:16.410 128-bit Host Identifier: Supported 00:19:16.410 Non-Operational Permissive Mode: Not Supported 00:19:16.410 NVM Sets: Not Supported 00:19:16.410 Read Recovery Levels: Not Supported 00:19:16.410 Endurance Groups: Not Supported 00:19:16.410 Predictable Latency Mode: Not Supported 00:19:16.410 Traffic Based Keep ALive: Not Supported 00:19:16.410 Namespace Granularity: Not Supported 00:19:16.410 SQ Associations: Not Supported 00:19:16.410 UUID List: Not Supported 00:19:16.410 Multi-Domain Subsystem: Not Supported 00:19:16.410 Fixed Capacity Management: Not Supported 00:19:16.410 Variable Capacity Management: Not Supported 00:19:16.410 Delete Endurance Group: Not Supported 00:19:16.410 Delete NVM Set: Not Supported 00:19:16.410 Extended LBA Formats Supported: Not Supported 00:19:16.410 Flexible Data Placement Supported: Not Supported 00:19:16.410 00:19:16.410 Controller Memory Buffer Support 00:19:16.410 ================================ 00:19:16.410 Supported: No 00:19:16.410 00:19:16.410 Persistent Memory Region Support 00:19:16.410 ================================ 00:19:16.410 Supported: No 00:19:16.410 00:19:16.410 Admin Command Set Attributes 00:19:16.410 ============================ 00:19:16.410 Security Send/Receive: Not Supported 00:19:16.410 Format NVM: Not Supported 00:19:16.410 Firmware Activate/Download: Not Supported 00:19:16.410 Namespace Management: Not Supported 00:19:16.410 Device Self-Test: Not Supported 00:19:16.410 Directives: Not Supported 00:19:16.410 NVMe-MI: Not Supported 00:19:16.410 Virtualization Management: Not Supported 00:19:16.410 Doorbell Buffer Config: Not Supported 00:19:16.410 Get LBA Status Capability: Not Supported 00:19:16.410 Command & Feature Lockdown Capability: Not Supported 00:19:16.410 Abort Command Limit: 4 00:19:16.410 Async Event Request Limit: 4 00:19:16.410 Number of Firmware Slots: N/A 00:19:16.410 Firmware Slot 1 Read-Only: N/A 00:19:16.410 Firmware Activation Without Reset: N/A 00:19:16.410 Multiple 
Update Detection Support: N/A 00:19:16.410 Firmware Update Granularity: No Information Provided 00:19:16.410 Per-Namespace SMART Log: No 00:19:16.410 Asymmetric Namespace Access Log Page: Not Supported 00:19:16.410 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:16.410 Command Effects Log Page: Supported 00:19:16.410 Get Log Page Extended Data: Supported 00:19:16.410 Telemetry Log Pages: Not Supported 00:19:16.410 Persistent Event Log Pages: Not Supported 00:19:16.410 Supported Log Pages Log Page: May Support 00:19:16.410 Commands Supported & Effects Log Page: Not Supported 00:19:16.410 Feature Identifiers & Effects Log Page:May Support 00:19:16.410 NVMe-MI Commands & Effects Log Page: May Support 00:19:16.410 Data Area 4 for Telemetry Log: Not Supported 00:19:16.410 Error Log Page Entries Supported: 128 00:19:16.410 Keep Alive: Supported 00:19:16.410 Keep Alive Granularity: 10000 ms 00:19:16.410 00:19:16.410 NVM Command Set Attributes 00:19:16.410 ========================== 00:19:16.410 Submission Queue Entry Size 00:19:16.410 Max: 64 00:19:16.410 Min: 64 00:19:16.410 Completion Queue Entry Size 00:19:16.410 Max: 16 00:19:16.410 Min: 16 00:19:16.410 Number of Namespaces: 32 00:19:16.410 Compare Command: Supported 00:19:16.410 Write Uncorrectable Command: Not Supported 00:19:16.410 Dataset Management Command: Supported 00:19:16.410 Write Zeroes Command: Supported 00:19:16.410 Set Features Save Field: Not Supported 00:19:16.410 Reservations: Supported 00:19:16.410 Timestamp: Not Supported 00:19:16.410 Copy: Supported 00:19:16.410 Volatile Write Cache: Present 00:19:16.410 Atomic Write Unit (Normal): 1 00:19:16.410 Atomic Write Unit (PFail): 1 00:19:16.410 Atomic Compare & Write Unit: 1 00:19:16.410 Fused Compare & Write: Supported 00:19:16.410 Scatter-Gather List 00:19:16.410 SGL Command Set: Supported 00:19:16.410 SGL Keyed: Supported 00:19:16.410 SGL Bit Bucket Descriptor: Not Supported 00:19:16.410 SGL Metadata Pointer: Not Supported 00:19:16.410 Oversized SGL: Not Supported 00:19:16.410 SGL Metadata Address: Not Supported 00:19:16.410 SGL Offset: Supported 00:19:16.410 Transport SGL Data Block: Not Supported 00:19:16.410 Replay Protected Memory Block: Not Supported 00:19:16.410 00:19:16.410 Firmware Slot Information 00:19:16.410 ========================= 00:19:16.410 Active slot: 1 00:19:16.410 Slot 1 Firmware Revision: 24.01.1 00:19:16.410 00:19:16.410 00:19:16.410 Commands Supported and Effects 00:19:16.410 ============================== 00:19:16.410 Admin Commands 00:19:16.410 -------------- 00:19:16.410 Get Log Page (02h): Supported 00:19:16.410 Identify (06h): Supported 00:19:16.410 Abort (08h): Supported 00:19:16.410 Set Features (09h): Supported 00:19:16.410 Get Features (0Ah): Supported 00:19:16.410 Asynchronous Event Request (0Ch): Supported 00:19:16.410 Keep Alive (18h): Supported 00:19:16.410 I/O Commands 00:19:16.410 ------------ 00:19:16.410 Flush (00h): Supported LBA-Change 00:19:16.410 Write (01h): Supported LBA-Change 00:19:16.410 Read (02h): Supported 00:19:16.410 Compare (05h): Supported 00:19:16.410 Write Zeroes (08h): Supported LBA-Change 00:19:16.410 Dataset Management (09h): Supported LBA-Change 00:19:16.410 Copy (19h): Supported LBA-Change 00:19:16.410 Unknown (79h): Supported LBA-Change 00:19:16.410 Unknown (7Ah): Supported 00:19:16.410 00:19:16.410 Error Log 00:19:16.410 ========= 00:19:16.410 00:19:16.410 Arbitration 00:19:16.410 =========== 00:19:16.410 Arbitration Burst: 1 00:19:16.410 00:19:16.410 Power Management 00:19:16.410 ================ 00:19:16.410 
Number of Power States: 1 00:19:16.410 Current Power State: Power State #0 00:19:16.410 Power State #0: 00:19:16.410 Max Power: 0.00 W 00:19:16.410 Non-Operational State: Operational 00:19:16.410 Entry Latency: Not Reported 00:19:16.410 Exit Latency: Not Reported 00:19:16.410 Relative Read Throughput: 0 00:19:16.410 Relative Read Latency: 0 00:19:16.410 Relative Write Throughput: 0 00:19:16.410 Relative Write Latency: 0 00:19:16.410 Idle Power: Not Reported 00:19:16.410 Active Power: Not Reported 00:19:16.410 Non-Operational Permissive Mode: Not Supported 00:19:16.410 00:19:16.410 Health Information 00:19:16.410 ================== 00:19:16.410 Critical Warnings: 00:19:16.410 Available Spare Space: OK 00:19:16.410 Temperature: OK 00:19:16.410 Device Reliability: OK 00:19:16.410 Read Only: No 00:19:16.410 Volatile Memory Backup: OK 00:19:16.410 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:16.410 Temperature Threshold: [2024-11-29 12:03:21.870712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.410 [2024-11-29 12:03:21.870720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.410 [2024-11-29 12:03:21.870725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2079540) 00:19:16.410 [2024-11-29 12:03:21.870734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.410 [2024-11-29 12:03:21.870764] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2bc0, cid 7, qid 0 00:19:16.410 [2024-11-29 12:03:21.870837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.410 [2024-11-29 12:03:21.870844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.410 [2024-11-29 12:03:21.870848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.410 [2024-11-29 12:03:21.870853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2bc0) on tqpair=0x2079540 00:19:16.410 [2024-11-29 12:03:21.870893] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:16.410 [2024-11-29 12:03:21.870914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.410 [2024-11-29 12:03:21.870926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.410 [2024-11-29 12:03:21.870936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.411 [2024-11-29 12:03:21.870946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.411 [2024-11-29 12:03:21.870960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.870965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.870969] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.411 [2024-11-29 12:03:21.870978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.411 [2024-11-29 12:03:21.871005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 
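(Reference note, not part of the captured output: the "Health Information" section above and the GET LOG PAGE admin commands in the trace come from reading the SMART / Health Information log page. A hedged sketch of fetching it with SPDK's public API follows; it assumes an already-connected ctrlr as in the earlier sketch, and the helper/variable names are illustrative. For the TCP transport a plain buffer is sufficient for the payload.)

    /* health_log_sketch.c - hedged sketch; reads the Health Information log
     * page (GET LOG PAGE 02h, LID 02h) that backs the report above. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool g_done;  /* illustrative completion flag */
    static struct spdk_nvme_health_information_page g_health;

    static void
    health_cpl(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;
    }

    static void
    print_health(struct spdk_nvme_ctrlr *ctrlr)
    {
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, &g_health,
                                             sizeof(g_health), 0,
                                             health_cpl, NULL) != 0) {
            return;
        }
        /* Poll the admin queue until the completion callback fires. */
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        /* Composite temperature is reported in Kelvin, as in the log above. */
        printf("Current Temperature: %u Kelvin\n", g_health.temperature);
    }
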
00:19:16.411 [2024-11-29 12:03:21.871083] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.411 [2024-11-29 12:03:21.871099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.411 [2024-11-29 12:03:21.871103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.411 [2024-11-29 12:03:21.871116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.411 [2024-11-29 12:03:21.871133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.411 [2024-11-29 12:03:21.871156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 00:19:16.411 [2024-11-29 12:03:21.871250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.411 [2024-11-29 12:03:21.871273] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.411 [2024-11-29 12:03:21.871278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871282] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.411 [2024-11-29 12:03:21.871289] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:16.411 [2024-11-29 12:03:21.871295] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:16.411 [2024-11-29 12:03:21.871306] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.411 [2024-11-29 12:03:21.871323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.411 [2024-11-29 12:03:21.871343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 00:19:16.411 [2024-11-29 12:03:21.871409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.411 [2024-11-29 12:03:21.871423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.411 [2024-11-29 12:03:21.871428] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.411 [2024-11-29 12:03:21.871457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.411 [2024-11-29 12:03:21.871468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.411 [2024-11-29 12:03:21.871476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.411 [2024-11-29 
12:03:21.874433]
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.413 [2024-11-29 12:03:21.874440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.413 [2024-11-29 12:03:21.874443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.874448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.413 [2024-11-29 12:03:21.874460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.874465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.874469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.413 [2024-11-29 12:03:21.874476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.413 [2024-11-29 12:03:21.874495] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 00:19:16.413 [2024-11-29 12:03:21.878525] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.413 [2024-11-29 12:03:21.878547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.413 [2024-11-29 12:03:21.878552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.878557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.413 [2024-11-29 12:03:21.878572] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.878578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.878582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2079540) 00:19:16.413 [2024-11-29 12:03:21.878591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.413 [2024-11-29 12:03:21.878617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b2640, cid 3, qid 0 00:19:16.413 [2024-11-29 12:03:21.878688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:16.413 [2024-11-29 12:03:21.878695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:16.413 [2024-11-29 12:03:21.878699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:16.413 [2024-11-29 12:03:21.878703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20b2640) on tqpair=0x2079540 00:19:16.413 [2024-11-29 12:03:21.878713] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:19:16.413 0 Kelvin (-273 Celsius) 00:19:16.413 Available Spare: 0% 00:19:16.413 Available Spare Threshold: 0% 00:19:16.413 Life Percentage Used: 0% 00:19:16.413 Data Units Read: 0 00:19:16.413 Data Units Written: 0 00:19:16.413 Host Read Commands: 0 00:19:16.413 Host Write Commands: 0 00:19:16.413 Controller Busy Time: 0 minutes 00:19:16.413 Power Cycles: 0 00:19:16.413 Power On Hours: 0 hours 00:19:16.413 Unsafe Shutdowns: 0 00:19:16.413 Unrecoverable Media Errors: 0 00:19:16.413 Lifetime Error Log Entries: 0 00:19:16.413 Warning Temperature Time: 0 minutes 00:19:16.413 Critical Temperature Time: 0 minutes 00:19:16.413 00:19:16.413 Number of Queues 00:19:16.413 ================ 00:19:16.413 Number of I/O 
Submission Queues: 127 00:19:16.413 Number of I/O Completion Queues: 127 00:19:16.413 00:19:16.413 Active Namespaces 00:19:16.413 ================= 00:19:16.413 Namespace ID:1 00:19:16.413 Error Recovery Timeout: Unlimited 00:19:16.413 Command Set Identifier: NVM (00h) 00:19:16.413 Deallocate: Supported 00:19:16.413 Deallocated/Unwritten Error: Not Supported 00:19:16.413 Deallocated Read Value: Unknown 00:19:16.413 Deallocate in Write Zeroes: Not Supported 00:19:16.413 Deallocated Guard Field: 0xFFFF 00:19:16.413 Flush: Supported 00:19:16.413 Reservation: Supported 00:19:16.413 Namespace Sharing Capabilities: Multiple Controllers 00:19:16.413 Size (in LBAs): 131072 (0GiB) 00:19:16.413 Capacity (in LBAs): 131072 (0GiB) 00:19:16.413 Utilization (in LBAs): 131072 (0GiB) 00:19:16.413 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:16.413 EUI64: ABCDEF0123456789 00:19:16.413 UUID: 82c9f4ae-7567-43a9-a041-deb3ebe0d62b 00:19:16.413 Thin Provisioning: Not Supported 00:19:16.413 Per-NS Atomic Units: Yes 00:19:16.413 Atomic Boundary Size (Normal): 0 00:19:16.413 Atomic Boundary Size (PFail): 0 00:19:16.413 Atomic Boundary Offset: 0 00:19:16.413 Maximum Single Source Range Length: 65535 00:19:16.413 Maximum Copy Length: 65535 00:19:16.413 Maximum Source Range Count: 1 00:19:16.413 NGUID/EUI64 Never Reused: No 00:19:16.413 Namespace Write Protected: No 00:19:16.413 Number of LBA Formats: 1 00:19:16.413 Current LBA Format: LBA Format #00 00:19:16.413 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:16.413 00:19:16.413 12:03:21 -- host/identify.sh@51 -- # sync 00:19:16.673 12:03:21 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.673 12:03:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.673 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.673 12:03:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.673 12:03:21 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:16.673 12:03:21 -- host/identify.sh@56 -- # nvmftestfini 00:19:16.673 12:03:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.673 12:03:21 -- nvmf/common.sh@116 -- # sync 00:19:16.673 12:03:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.673 12:03:21 -- nvmf/common.sh@119 -- # set +e 00:19:16.673 12:03:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.673 12:03:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.673 rmmod nvme_tcp 00:19:16.673 rmmod nvme_fabrics 00:19:16.673 rmmod nvme_keyring 00:19:16.674 12:03:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.674 12:03:22 -- nvmf/common.sh@123 -- # set -e 00:19:16.674 12:03:22 -- nvmf/common.sh@124 -- # return 0 00:19:16.674 12:03:22 -- nvmf/common.sh@477 -- # '[' -n 80872 ']' 00:19:16.674 12:03:22 -- nvmf/common.sh@478 -- # killprocess 80872 00:19:16.674 12:03:22 -- common/autotest_common.sh@936 -- # '[' -z 80872 ']' 00:19:16.674 12:03:22 -- common/autotest_common.sh@940 -- # kill -0 80872 00:19:16.674 12:03:22 -- common/autotest_common.sh@941 -- # uname 00:19:16.674 12:03:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.674 12:03:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80872 00:19:16.674 12:03:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:16.674 12:03:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:16.674 killing process with pid 80872 00:19:16.674 12:03:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80872' 00:19:16.674 12:03:22 
-- common/autotest_common.sh@955 -- # kill 80872 00:19:16.674 [2024-11-29 12:03:22.086649] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:16.674 12:03:22 -- common/autotest_common.sh@960 -- # wait 80872 00:19:17.028 12:03:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:17.028 12:03:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:17.028 12:03:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:17.028 12:03:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.028 12:03:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:17.028 12:03:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.028 12:03:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.028 12:03:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.028 12:03:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:17.028 00:19:17.028 real 0m2.777s 00:19:17.028 user 0m7.491s 00:19:17.028 sys 0m0.763s 00:19:17.028 12:03:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:17.028 12:03:22 -- common/autotest_common.sh@10 -- # set +x 00:19:17.028 ************************************ 00:19:17.028 END TEST nvmf_identify 00:19:17.028 ************************************ 00:19:17.028 12:03:22 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:17.028 12:03:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:17.028 12:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:17.028 12:03:22 -- common/autotest_common.sh@10 -- # set +x 00:19:17.028 ************************************ 00:19:17.028 START TEST nvmf_perf 00:19:17.028 ************************************ 00:19:17.028 12:03:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:17.288 * Looking for test storage... 00:19:17.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.288 12:03:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:17.288 12:03:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:17.288 12:03:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:17.288 12:03:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:17.288 12:03:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:17.288 12:03:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:17.288 12:03:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:17.288 12:03:22 -- scripts/common.sh@335 -- # IFS=.-: 00:19:17.288 12:03:22 -- scripts/common.sh@335 -- # read -ra ver1 00:19:17.288 12:03:22 -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.288 12:03:22 -- scripts/common.sh@336 -- # read -ra ver2 00:19:17.288 12:03:22 -- scripts/common.sh@337 -- # local 'op=<' 00:19:17.288 12:03:22 -- scripts/common.sh@339 -- # ver1_l=2 00:19:17.288 12:03:22 -- scripts/common.sh@340 -- # ver2_l=1 00:19:17.288 12:03:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:17.288 12:03:22 -- scripts/common.sh@343 -- # case "$op" in 00:19:17.288 12:03:22 -- scripts/common.sh@344 -- # : 1 00:19:17.288 12:03:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:17.288 12:03:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.288 12:03:22 -- scripts/common.sh@364 -- # decimal 1 00:19:17.288 12:03:22 -- scripts/common.sh@352 -- # local d=1 00:19:17.288 12:03:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.288 12:03:22 -- scripts/common.sh@354 -- # echo 1 00:19:17.288 12:03:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:17.288 12:03:22 -- scripts/common.sh@365 -- # decimal 2 00:19:17.288 12:03:22 -- scripts/common.sh@352 -- # local d=2 00:19:17.288 12:03:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.288 12:03:22 -- scripts/common.sh@354 -- # echo 2 00:19:17.288 12:03:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:17.288 12:03:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:17.288 12:03:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:17.288 12:03:22 -- scripts/common.sh@367 -- # return 0 00:19:17.288 12:03:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.288 12:03:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.288 --rc genhtml_branch_coverage=1 00:19:17.288 --rc genhtml_function_coverage=1 00:19:17.288 --rc genhtml_legend=1 00:19:17.288 --rc geninfo_all_blocks=1 00:19:17.288 --rc geninfo_unexecuted_blocks=1 00:19:17.288 00:19:17.288 ' 00:19:17.288 12:03:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.288 --rc genhtml_branch_coverage=1 00:19:17.288 --rc genhtml_function_coverage=1 00:19:17.288 --rc genhtml_legend=1 00:19:17.288 --rc geninfo_all_blocks=1 00:19:17.288 --rc geninfo_unexecuted_blocks=1 00:19:17.288 00:19:17.288 ' 00:19:17.288 12:03:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.288 --rc genhtml_branch_coverage=1 00:19:17.288 --rc genhtml_function_coverage=1 00:19:17.288 --rc genhtml_legend=1 00:19:17.288 --rc geninfo_all_blocks=1 00:19:17.288 --rc geninfo_unexecuted_blocks=1 00:19:17.288 00:19:17.288 ' 00:19:17.288 12:03:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.288 --rc genhtml_branch_coverage=1 00:19:17.288 --rc genhtml_function_coverage=1 00:19:17.288 --rc genhtml_legend=1 00:19:17.288 --rc geninfo_all_blocks=1 00:19:17.288 --rc geninfo_unexecuted_blocks=1 00:19:17.288 00:19:17.288 ' 00:19:17.288 12:03:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.288 12:03:22 -- nvmf/common.sh@7 -- # uname -s 00:19:17.288 12:03:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.288 12:03:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.288 12:03:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.288 12:03:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.288 12:03:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.288 12:03:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.288 12:03:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.288 12:03:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.288 12:03:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.288 12:03:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.288 12:03:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:19:17.288 
12:03:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:19:17.288 12:03:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.288 12:03:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.288 12:03:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.288 12:03:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.288 12:03:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.288 12:03:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.288 12:03:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.288 12:03:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.288 12:03:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.288 12:03:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.288 12:03:22 -- paths/export.sh@5 -- # export PATH 00:19:17.288 12:03:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.288 12:03:22 -- nvmf/common.sh@46 -- # : 0 00:19:17.288 12:03:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:17.288 12:03:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:17.288 12:03:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:17.288 12:03:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.288 12:03:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.288 12:03:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
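For reference, the nvmf/common.sh trace above boils down to a small set of test-wide settings for the virt/TCP topology. A minimal sketch, with values copied from the trace; the NVME_HOSTID derivation is an assumption based on the generated host NQN shown above, not a quote of the script:

  # NVMe-oF test defaults established by nvmf/common.sh (virt/TCP case)
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_TRANSPORT_OPTS=""                      # later becomes '-t tcp -o' once the transport is chosen
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:79493c5c-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: host ID reuses the uuid suffix of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT="nvme connect"
  NET_TYPE=virt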
00:19:17.288 12:03:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:17.288 12:03:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:17.289 12:03:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:17.289 12:03:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:17.289 12:03:22 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.289 12:03:22 -- host/perf.sh@17 -- # nvmftestinit 00:19:17.289 12:03:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:17.289 12:03:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.289 12:03:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:17.289 12:03:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:17.289 12:03:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:17.289 12:03:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.289 12:03:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.289 12:03:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.289 12:03:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:17.289 12:03:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:17.289 12:03:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:17.289 12:03:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:17.289 12:03:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:17.289 12:03:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:17.289 12:03:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.289 12:03:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.289 12:03:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:17.289 12:03:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:17.289 12:03:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.289 12:03:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.289 12:03:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.289 12:03:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.289 12:03:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.289 12:03:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.289 12:03:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.289 12:03:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.289 12:03:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:17.289 12:03:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:17.289 Cannot find device "nvmf_tgt_br" 00:19:17.289 12:03:22 -- nvmf/common.sh@154 -- # true 00:19:17.289 12:03:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.289 Cannot find device "nvmf_tgt_br2" 00:19:17.289 12:03:22 -- nvmf/common.sh@155 -- # true 00:19:17.289 12:03:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:17.289 12:03:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:17.289 Cannot find device "nvmf_tgt_br" 00:19:17.548 12:03:22 -- nvmf/common.sh@157 -- # true 00:19:17.548 12:03:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:17.548 Cannot find device "nvmf_tgt_br2" 00:19:17.548 12:03:22 -- nvmf/common.sh@158 -- # true 00:19:17.548 12:03:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:17.548 12:03:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:17.548 12:03:22 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.548 12:03:22 -- nvmf/common.sh@161 -- # true 00:19:17.548 12:03:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.548 12:03:22 -- nvmf/common.sh@162 -- # true 00:19:17.548 12:03:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.548 12:03:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.548 12:03:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.548 12:03:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.548 12:03:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.548 12:03:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.548 12:03:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.548 12:03:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:17.548 12:03:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:17.548 12:03:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:17.548 12:03:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:17.548 12:03:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:17.548 12:03:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:17.548 12:03:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.548 12:03:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.548 12:03:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.548 12:03:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:17.548 12:03:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:17.549 12:03:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.549 12:03:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.807 12:03:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.807 12:03:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.807 12:03:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.807 12:03:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:17.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:19:17.807 00:19:17.807 --- 10.0.0.2 ping statistics --- 00:19:17.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.807 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:19:17.807 12:03:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:17.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:17.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:17.807 00:19:17.807 --- 10.0.0.3 ping statistics --- 00:19:17.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.807 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:17.807 12:03:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:19:17.807 00:19:17.807 --- 10.0.0.1 ping statistics --- 00:19:17.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.808 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:17.808 12:03:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.808 12:03:23 -- nvmf/common.sh@421 -- # return 0 00:19:17.808 12:03:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:17.808 12:03:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.808 12:03:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:17.808 12:03:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:17.808 12:03:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.808 12:03:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:17.808 12:03:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:17.808 12:03:23 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:17.808 12:03:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.808 12:03:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.808 12:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:17.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.808 12:03:23 -- nvmf/common.sh@469 -- # nvmfpid=81090 00:19:17.808 12:03:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:17.808 12:03:23 -- nvmf/common.sh@470 -- # waitforlisten 81090 00:19:17.808 12:03:23 -- common/autotest_common.sh@829 -- # '[' -z 81090 ']' 00:19:17.808 12:03:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.808 12:03:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.808 12:03:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.808 12:03:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.808 12:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:17.808 [2024-11-29 12:03:23.168391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:17.808 [2024-11-29 12:03:23.168824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.808 [2024-11-29 12:03:23.309212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.066 [2024-11-29 12:03:23.419863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:18.066 [2024-11-29 12:03:23.420342] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.066 [2024-11-29 12:03:23.420572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
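Before nvmf_tgt comes up, nvmf_veth_init builds the test network whose pieces are scattered through the trace above (the first few delete/nomaster commands fail simply because nothing exists yet). A condensed sketch of the topology, using only commands that appear in the trace:

  # The SPDK target lives in its own namespace; veth pairs are bridged back to the initiator side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,    10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side,    10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                        # initiator -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as in the trace), so it listens on 10.0.0.2 while the perf tool on the host reaches it through nvmf_br.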
00:19:18.066 [2024-11-29 12:03:23.420823] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.066 [2024-11-29 12:03:23.421344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.066 [2024-11-29 12:03:23.421454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.066 [2024-11-29 12:03:23.421527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.066 [2024-11-29 12:03:23.421530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.634 12:03:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.634 12:03:24 -- common/autotest_common.sh@862 -- # return 0 00:19:18.634 12:03:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:18.634 12:03:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:18.634 12:03:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.893 12:03:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.893 12:03:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:18.893 12:03:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:19.152 12:03:24 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:19.152 12:03:24 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:19.410 12:03:24 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:19:19.410 12:03:24 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.669 12:03:25 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:19.669 12:03:25 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:19:19.669 12:03:25 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:19.669 12:03:25 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:19.669 12:03:25 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.236 [2024-11-29 12:03:25.452490] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.236 12:03:25 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.494 12:03:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:20.494 12:03:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.752 12:03:26 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:20.752 12:03:26 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:21.010 12:03:26 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.269 [2024-11-29 12:03:26.591184] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.269 12:03:26 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:21.572 12:03:26 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:21.572 12:03:26 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:21.572 12:03:26 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:21.572 12:03:26 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:22.543 Initializing NVMe Controllers 00:19:22.543 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:19:22.543 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:19:22.543 Initialization complete. Launching workers. 00:19:22.543 ======================================================== 00:19:22.543 Latency(us) 00:19:22.543 Device Information : IOPS MiB/s Average min max 00:19:22.543 PCIE (0000:00:06.0) NSID 1 from core 0: 24287.98 94.87 1317.37 323.29 7515.58 00:19:22.543 ======================================================== 00:19:22.543 Total : 24287.98 94.87 1317.37 323.29 7515.58 00:19:22.543 00:19:22.543 12:03:28 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:23.921 Initializing NVMe Controllers 00:19:23.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:23.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:23.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:23.921 Initialization complete. Launching workers. 00:19:23.921 ======================================================== 00:19:23.921 Latency(us) 00:19:23.921 Device Information : IOPS MiB/s Average min max 00:19:23.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3004.96 11.74 331.13 116.93 4616.68 00:19:23.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8119.88 3983.75 12049.29 00:19:23.921 ======================================================== 00:19:23.921 Total : 3128.96 12.22 639.79 116.93 12049.29 00:19:23.921 00:19:23.921 12:03:29 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:25.295 Initializing NVMe Controllers 00:19:25.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:25.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:25.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:25.295 Initialization complete. Launching workers. 00:19:25.295 ======================================================== 00:19:25.295 Latency(us) 00:19:25.295 Device Information : IOPS MiB/s Average min max 00:19:25.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8362.96 32.67 3826.12 477.83 10281.48 00:19:25.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3899.92 15.23 8235.36 4632.93 18475.52 00:19:25.295 ======================================================== 00:19:25.295 Total : 12262.88 47.90 5228.38 477.83 18475.52 00:19:25.295 00:19:25.295 12:03:30 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:25.295 12:03:30 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:28.574 Initializing NVMe Controllers 00:19:28.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:28.574 Controller IO queue size 128, less than required. 
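The fabrics runs above and below all talk to the same target-side configuration, built by the host/perf.sh steps traced earlier (perf.sh@28 through perf.sh@49). Collected into one place, and with the plumbing between gen_nvme.sh and rpc.py assumed rather than quoted, the sequence is roughly:

  # Attach the local NVMe controller (traddr 0000:00:06.0) and export it plus a RAM disk over TCP
  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config        # provides bdev Nvme0n1 (plumbing assumed)
  scripts/rpc.py bdev_malloc_create 64 512                          # 64 MiB RAM disk, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

That gives the perf tool two namespaces behind nqn.2016-06.io.spdk:cnode1 (NSID 1 = Malloc0 with 512 B sectors, NSID 2 = Nvme0n1 with 4096 B sectors), which is why the two NSIDs behave so differently in the result tables: NSID 1 is RAM-backed, NSID 2 is the emulated NVMe drive.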
00:19:28.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:28.574 Controller IO queue size 128, less than required. 00:19:28.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:28.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:28.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:28.574 Initialization complete. Launching workers. 00:19:28.574 ======================================================== 00:19:28.574 Latency(us) 00:19:28.574 Device Information : IOPS MiB/s Average min max 00:19:28.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1553.83 388.46 83602.38 38456.91 142889.26 00:19:28.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 641.22 160.31 213512.85 72500.86 372643.24 00:19:28.574 ======================================================== 00:19:28.574 Total : 2195.05 548.76 121552.09 38456.91 372643.24 00:19:28.574 00:19:28.574 12:03:33 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:19:28.574 No valid NVMe controllers or AIO or URING devices found 00:19:28.574 Initializing NVMe Controllers 00:19:28.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:28.574 Controller IO queue size 128, less than required. 00:19:28.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:28.574 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:28.574 Controller IO queue size 128, less than required. 00:19:28.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:28.574 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:28.574 WARNING: Some requested NVMe devices were skipped 00:19:28.574 12:03:33 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:31.099 Initializing NVMe Controllers 00:19:31.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:31.099 Controller IO queue size 128, less than required. 00:19:31.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:31.099 Controller IO queue size 128, less than required. 00:19:31.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:31.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:31.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:31.099 Initialization complete. Launching workers. 
00:19:31.099 00:19:31.099 ==================== 00:19:31.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:31.099 TCP transport: 00:19:31.099 polls: 7264 00:19:31.099 idle_polls: 0 00:19:31.099 sock_completions: 7264 00:19:31.099 nvme_completions: 5491 00:19:31.099 submitted_requests: 8417 00:19:31.099 queued_requests: 1 00:19:31.099 00:19:31.099 ==================== 00:19:31.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:31.099 TCP transport: 00:19:31.099 polls: 7318 00:19:31.099 idle_polls: 0 00:19:31.099 sock_completions: 7318 00:19:31.099 nvme_completions: 5522 00:19:31.099 submitted_requests: 8442 00:19:31.099 queued_requests: 1 00:19:31.099 ======================================================== 00:19:31.099 Latency(us) 00:19:31.099 Device Information : IOPS MiB/s Average min max 00:19:31.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1433.58 358.40 91858.89 44416.19 145707.41 00:19:31.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1441.07 360.27 89120.78 38245.38 148777.90 00:19:31.099 ======================================================== 00:19:31.099 Total : 2874.65 718.66 90486.27 38245.38 148777.90 00:19:31.099 00:19:31.099 12:03:36 -- host/perf.sh@66 -- # sync 00:19:31.099 12:03:36 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.099 12:03:36 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:19:31.099 12:03:36 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:19:31.099 12:03:36 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:19:31.358 12:03:36 -- host/perf.sh@72 -- # ls_guid=20c4d059-6f8f-492c-a295-1e413a65e7ed 00:19:31.358 12:03:36 -- host/perf.sh@73 -- # get_lvs_free_mb 20c4d059-6f8f-492c-a295-1e413a65e7ed 00:19:31.358 12:03:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=20c4d059-6f8f-492c-a295-1e413a65e7ed 00:19:31.358 12:03:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:19:31.358 12:03:36 -- common/autotest_common.sh@1355 -- # local fc 00:19:31.358 12:03:36 -- common/autotest_common.sh@1356 -- # local cs 00:19:31.358 12:03:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:31.618 12:03:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:19:31.618 { 00:19:31.618 "uuid": "20c4d059-6f8f-492c-a295-1e413a65e7ed", 00:19:31.618 "name": "lvs_0", 00:19:31.618 "base_bdev": "Nvme0n1", 00:19:31.618 "total_data_clusters": 1278, 00:19:31.618 "free_clusters": 1278, 00:19:31.618 "block_size": 4096, 00:19:31.618 "cluster_size": 4194304 00:19:31.618 } 00:19:31.618 ]' 00:19:31.618 12:03:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="20c4d059-6f8f-492c-a295-1e413a65e7ed") .free_clusters' 00:19:31.888 12:03:37 -- common/autotest_common.sh@1358 -- # fc=1278 00:19:31.888 12:03:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="20c4d059-6f8f-492c-a295-1e413a65e7ed") .cluster_size' 00:19:31.888 12:03:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:19:31.888 5112 00:19:31.888 12:03:37 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:19:31.888 12:03:37 -- common/autotest_common.sh@1363 -- # echo 5112 00:19:31.888 12:03:37 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:19:31.888 12:03:37 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
20c4d059-6f8f-492c-a295-1e413a65e7ed lbd_0 5112 00:19:32.147 12:03:37 -- host/perf.sh@80 -- # lb_guid=ceb7d73f-66b4-4ecf-babe-ca534e007d2b 00:19:32.147 12:03:37 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore ceb7d73f-66b4-4ecf-babe-ca534e007d2b lvs_n_0 00:19:32.407 12:03:37 -- host/perf.sh@83 -- # ls_nested_guid=8b5d28ea-cbdf-4957-96ac-3a64e6af6672 00:19:32.407 12:03:37 -- host/perf.sh@84 -- # get_lvs_free_mb 8b5d28ea-cbdf-4957-96ac-3a64e6af6672 00:19:32.407 12:03:37 -- common/autotest_common.sh@1353 -- # local lvs_uuid=8b5d28ea-cbdf-4957-96ac-3a64e6af6672 00:19:32.407 12:03:37 -- common/autotest_common.sh@1354 -- # local lvs_info 00:19:32.407 12:03:37 -- common/autotest_common.sh@1355 -- # local fc 00:19:32.407 12:03:37 -- common/autotest_common.sh@1356 -- # local cs 00:19:32.407 12:03:37 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:32.667 12:03:38 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:19:32.667 { 00:19:32.667 "uuid": "20c4d059-6f8f-492c-a295-1e413a65e7ed", 00:19:32.667 "name": "lvs_0", 00:19:32.667 "base_bdev": "Nvme0n1", 00:19:32.667 "total_data_clusters": 1278, 00:19:32.667 "free_clusters": 0, 00:19:32.667 "block_size": 4096, 00:19:32.667 "cluster_size": 4194304 00:19:32.667 }, 00:19:32.667 { 00:19:32.667 "uuid": "8b5d28ea-cbdf-4957-96ac-3a64e6af6672", 00:19:32.667 "name": "lvs_n_0", 00:19:32.667 "base_bdev": "ceb7d73f-66b4-4ecf-babe-ca534e007d2b", 00:19:32.667 "total_data_clusters": 1276, 00:19:32.667 "free_clusters": 1276, 00:19:32.667 "block_size": 4096, 00:19:32.667 "cluster_size": 4194304 00:19:32.667 } 00:19:32.667 ]' 00:19:32.667 12:03:38 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8b5d28ea-cbdf-4957-96ac-3a64e6af6672") .free_clusters' 00:19:32.667 12:03:38 -- common/autotest_common.sh@1358 -- # fc=1276 00:19:32.667 12:03:38 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8b5d28ea-cbdf-4957-96ac-3a64e6af6672") .cluster_size' 00:19:32.667 5104 00:19:32.667 12:03:38 -- common/autotest_common.sh@1359 -- # cs=4194304 00:19:32.667 12:03:38 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:19:32.667 12:03:38 -- common/autotest_common.sh@1363 -- # echo 5104 00:19:32.667 12:03:38 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:19:32.926 12:03:38 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b5d28ea-cbdf-4957-96ac-3a64e6af6672 lbd_nest_0 5104 00:19:32.926 12:03:38 -- host/perf.sh@88 -- # lb_nested_guid=fbb67e0f-a99e-4f31-a2bd-f09388b8ce6a 00:19:32.926 12:03:38 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.493 12:03:38 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:19:33.493 12:03:38 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 fbb67e0f-a99e-4f31-a2bd-f09388b8ce6a 00:19:33.493 12:03:38 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.751 12:03:39 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:19:33.751 12:03:39 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:19:33.751 12:03:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:33.751 12:03:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:33.751 12:03:39 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:34.367 No valid NVMe controllers or AIO or URING devices found 00:19:34.367 Initializing NVMe Controllers 00:19:34.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:34.367 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:34.367 WARNING: Some requested NVMe devices were skipped 00:19:34.367 12:03:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:34.367 12:03:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:44.358 Initializing NVMe Controllers 00:19:44.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:44.358 Initialization complete. Launching workers. 00:19:44.358 ======================================================== 00:19:44.358 Latency(us) 00:19:44.358 Device Information : IOPS MiB/s Average min max 00:19:44.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 853.23 106.65 1171.18 376.88 8625.10 00:19:44.358 ======================================================== 00:19:44.358 Total : 853.23 106.65 1171.18 376.88 8625.10 00:19:44.358 00:19:44.358 12:03:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:44.358 12:03:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:44.358 12:03:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:44.616 No valid NVMe controllers or AIO or URING devices found 00:19:44.875 Initializing NVMe Controllers 00:19:44.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.875 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:44.875 WARNING: Some requested NVMe devices were skipped 00:19:44.875 12:03:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:44.875 12:03:50 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:57.085 Initializing NVMe Controllers 00:19:57.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:57.086 Initialization complete. Launching workers. 
00:19:57.086 ======================================================== 00:19:57.086 Latency(us) 00:19:57.086 Device Information : IOPS MiB/s Average min max 00:19:57.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1337.22 167.15 23959.31 6405.14 68077.69 00:19:57.086 ======================================================== 00:19:57.086 Total : 1337.22 167.15 23959.31 6405.14 68077.69 00:19:57.086 00:19:57.086 12:04:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:57.086 12:04:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:57.086 12:04:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:57.086 No valid NVMe controllers or AIO or URING devices found 00:19:57.086 Initializing NVMe Controllers 00:19:57.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.086 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:57.086 WARNING: Some requested NVMe devices were skipped 00:19:57.086 12:04:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:57.086 12:04:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:07.063 Initializing NVMe Controllers 00:20:07.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.063 Controller IO queue size 128, less than required. 00:20:07.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.063 Initialization complete. Launching workers. 
00:20:07.063 ======================================================== 00:20:07.063 Latency(us) 00:20:07.063 Device Information : IOPS MiB/s Average min max 00:20:07.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3836.72 479.59 33401.37 12895.51 85877.26 00:20:07.063 ======================================================== 00:20:07.063 Total : 3836.72 479.59 33401.37 12895.51 85877.26 00:20:07.063 00:20:07.063 12:04:11 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.063 12:04:11 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fbb67e0f-a99e-4f31-a2bd-f09388b8ce6a 00:20:07.063 12:04:11 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:07.063 12:04:12 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ceb7d73f-66b4-4ecf-babe-ca534e007d2b 00:20:07.063 12:04:12 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:07.063 12:04:12 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:07.063 12:04:12 -- host/perf.sh@114 -- # nvmftestfini 00:20:07.063 12:04:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:07.063 12:04:12 -- nvmf/common.sh@116 -- # sync 00:20:07.323 12:04:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:07.323 12:04:12 -- nvmf/common.sh@119 -- # set +e 00:20:07.323 12:04:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:07.323 12:04:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:07.323 rmmod nvme_tcp 00:20:07.323 rmmod nvme_fabrics 00:20:07.323 rmmod nvme_keyring 00:20:07.323 12:04:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:07.323 12:04:12 -- nvmf/common.sh@123 -- # set -e 00:20:07.323 12:04:12 -- nvmf/common.sh@124 -- # return 0 00:20:07.323 12:04:12 -- nvmf/common.sh@477 -- # '[' -n 81090 ']' 00:20:07.323 12:04:12 -- nvmf/common.sh@478 -- # killprocess 81090 00:20:07.323 12:04:12 -- common/autotest_common.sh@936 -- # '[' -z 81090 ']' 00:20:07.323 12:04:12 -- common/autotest_common.sh@940 -- # kill -0 81090 00:20:07.323 12:04:12 -- common/autotest_common.sh@941 -- # uname 00:20:07.323 12:04:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.323 12:04:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81090 00:20:07.323 12:04:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:07.323 12:04:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:07.323 killing process with pid 81090 00:20:07.323 12:04:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81090' 00:20:07.323 12:04:12 -- common/autotest_common.sh@955 -- # kill 81090 00:20:07.323 12:04:12 -- common/autotest_common.sh@960 -- # wait 81090 00:20:08.697 12:04:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.697 12:04:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.697 12:04:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.697 12:04:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.697 12:04:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.697 12:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.697 12:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.697 12:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.697 12:04:13 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:08.697 00:20:08.697 real 0m51.409s 00:20:08.697 user 3m12.927s 00:20:08.697 sys 0m13.079s 00:20:08.697 12:04:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:08.697 12:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 ************************************ 00:20:08.697 END TEST nvmf_perf 00:20:08.697 ************************************ 00:20:08.697 12:04:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:08.697 12:04:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.697 12:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.697 12:04:13 -- common/autotest_common.sh@10 -- # set +x 00:20:08.697 ************************************ 00:20:08.697 START TEST nvmf_fio_host 00:20:08.697 ************************************ 00:20:08.697 12:04:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:08.697 * Looking for test storage... 00:20:08.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:08.697 12:04:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:08.697 12:04:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:08.697 12:04:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:08.697 12:04:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:08.697 12:04:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:08.697 12:04:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:08.697 12:04:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:08.697 12:04:14 -- scripts/common.sh@335 -- # IFS=.-: 00:20:08.697 12:04:14 -- scripts/common.sh@335 -- # read -ra ver1 00:20:08.697 12:04:14 -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.697 12:04:14 -- scripts/common.sh@336 -- # read -ra ver2 00:20:08.697 12:04:14 -- scripts/common.sh@337 -- # local 'op=<' 00:20:08.697 12:04:14 -- scripts/common.sh@339 -- # ver1_l=2 00:20:08.697 12:04:14 -- scripts/common.sh@340 -- # ver2_l=1 00:20:08.697 12:04:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:08.697 12:04:14 -- scripts/common.sh@343 -- # case "$op" in 00:20:08.697 12:04:14 -- scripts/common.sh@344 -- # : 1 00:20:08.697 12:04:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:08.697 12:04:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.697 12:04:14 -- scripts/common.sh@364 -- # decimal 1 00:20:08.697 12:04:14 -- scripts/common.sh@352 -- # local d=1 00:20:08.697 12:04:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.697 12:04:14 -- scripts/common.sh@354 -- # echo 1 00:20:08.697 12:04:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:08.697 12:04:14 -- scripts/common.sh@365 -- # decimal 2 00:20:08.697 12:04:14 -- scripts/common.sh@352 -- # local d=2 00:20:08.697 12:04:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.697 12:04:14 -- scripts/common.sh@354 -- # echo 2 00:20:08.697 12:04:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:08.697 12:04:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:08.697 12:04:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:08.697 12:04:14 -- scripts/common.sh@367 -- # return 0 00:20:08.697 12:04:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.697 12:04:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.697 --rc genhtml_branch_coverage=1 00:20:08.697 --rc genhtml_function_coverage=1 00:20:08.697 --rc genhtml_legend=1 00:20:08.697 --rc geninfo_all_blocks=1 00:20:08.697 --rc geninfo_unexecuted_blocks=1 00:20:08.697 00:20:08.697 ' 00:20:08.697 12:04:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.697 --rc genhtml_branch_coverage=1 00:20:08.697 --rc genhtml_function_coverage=1 00:20:08.697 --rc genhtml_legend=1 00:20:08.697 --rc geninfo_all_blocks=1 00:20:08.697 --rc geninfo_unexecuted_blocks=1 00:20:08.697 00:20:08.697 ' 00:20:08.697 12:04:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.697 --rc genhtml_branch_coverage=1 00:20:08.697 --rc genhtml_function_coverage=1 00:20:08.697 --rc genhtml_legend=1 00:20:08.697 --rc geninfo_all_blocks=1 00:20:08.697 --rc geninfo_unexecuted_blocks=1 00:20:08.697 00:20:08.697 ' 00:20:08.697 12:04:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:08.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.697 --rc genhtml_branch_coverage=1 00:20:08.697 --rc genhtml_function_coverage=1 00:20:08.697 --rc genhtml_legend=1 00:20:08.697 --rc geninfo_all_blocks=1 00:20:08.697 --rc geninfo_unexecuted_blocks=1 00:20:08.697 00:20:08.697 ' 00:20:08.697 12:04:14 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.697 12:04:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.697 12:04:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.697 12:04:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.697 12:04:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.697 12:04:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.697 12:04:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.697 12:04:14 -- paths/export.sh@5 -- # export PATH 00:20:08.697 12:04:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.697 12:04:14 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.697 12:04:14 -- nvmf/common.sh@7 -- # uname -s 00:20:08.697 12:04:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.697 12:04:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.697 12:04:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.697 12:04:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.697 12:04:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.697 12:04:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.697 12:04:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.697 12:04:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.697 12:04:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.697 12:04:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.697 12:04:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:20:08.697 12:04:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:20:08.697 12:04:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.697 12:04:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.697 12:04:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.697 12:04:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.697 12:04:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.697 12:04:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.697 12:04:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.698 12:04:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.698 12:04:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.698 12:04:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.698 12:04:14 -- paths/export.sh@5 -- # export PATH 00:20:08.698 12:04:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.698 12:04:14 -- nvmf/common.sh@46 -- # : 0 00:20:08.698 12:04:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:08.698 12:04:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:08.698 12:04:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:08.698 12:04:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.698 12:04:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.698 12:04:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:08.698 12:04:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:08.698 12:04:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:08.698 12:04:14 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.698 12:04:14 -- host/fio.sh@14 -- # nvmftestinit 00:20:08.698 12:04:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:08.698 12:04:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.698 12:04:14 -- nvmf/common.sh@436 -- # prepare_net_devs 
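A minimal sketch (not part of the captured output) of the queue-depth / IO-size sweep that host/perf.sh drives in the perf stage above. Only flags that appear in the trace are used; the SPDK checkout path is assumed to match the CI VM layout, and the target is assumed to already be listening on 10.0.0.2:4420, as it was earlier in this log.

#!/usr/bin/env bash
# Sketch: reproduce the spdk_nvme_perf sweep from the nvmf_perf stage.
# Assumption: SPDK built under $SPDK_DIR and an NVMe/TCP subsystem already
# listening on 10.0.0.2:4420, as set up earlier in this log.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

qd_depth=("1" "32" "128")
io_size=("512" "131072")

for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # 50/50 random read/write for 10 s per combination (perf.sh@99)
    "$SPDK_DIR/build/bin/spdk_nvme_perf" \
      -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
  done
done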
00:20:08.698 12:04:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:08.698 12:04:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:08.698 12:04:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.698 12:04:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.698 12:04:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.698 12:04:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:08.698 12:04:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:08.698 12:04:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:08.698 12:04:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:08.698 12:04:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:08.698 12:04:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:08.698 12:04:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.698 12:04:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.698 12:04:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.698 12:04:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:08.698 12:04:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.698 12:04:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.698 12:04:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.698 12:04:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.698 12:04:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.698 12:04:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.698 12:04:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.698 12:04:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.698 12:04:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:08.955 12:04:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:08.955 Cannot find device "nvmf_tgt_br" 00:20:08.955 12:04:14 -- nvmf/common.sh@154 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.955 Cannot find device "nvmf_tgt_br2" 00:20:08.955 12:04:14 -- nvmf/common.sh@155 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:08.955 12:04:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:08.955 Cannot find device "nvmf_tgt_br" 00:20:08.955 12:04:14 -- nvmf/common.sh@157 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:08.955 Cannot find device "nvmf_tgt_br2" 00:20:08.955 12:04:14 -- nvmf/common.sh@158 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:08.955 12:04:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:08.955 12:04:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.955 12:04:14 -- nvmf/common.sh@161 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.955 12:04:14 -- nvmf/common.sh@162 -- # true 00:20:08.955 12:04:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.955 12:04:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.955 12:04:14 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.955 12:04:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.955 12:04:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.955 12:04:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.955 12:04:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.955 12:04:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.955 12:04:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.955 12:04:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:08.955 12:04:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:08.955 12:04:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:08.955 12:04:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:08.955 12:04:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.955 12:04:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.955 12:04:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.955 12:04:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:08.955 12:04:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:08.955 12:04:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:09.213 12:04:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:09.213 12:04:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:09.213 12:04:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:09.213 12:04:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.213 12:04:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:09.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:09.213 00:20:09.213 --- 10.0.0.2 ping statistics --- 00:20:09.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.213 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:09.213 12:04:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:09.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:09.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:09.213 00:20:09.213 --- 10.0.0.3 ping statistics --- 00:20:09.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.213 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:09.213 12:04:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:09.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:09.213 00:20:09.213 --- 10.0.0.1 ping statistics --- 00:20:09.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.213 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:09.213 12:04:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.213 12:04:14 -- nvmf/common.sh@421 -- # return 0 00:20:09.213 12:04:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:09.213 12:04:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.213 12:04:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:09.213 12:04:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:09.213 12:04:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.213 12:04:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:09.213 12:04:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:09.213 12:04:14 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:09.213 12:04:14 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:09.213 12:04:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.213 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:09.213 12:04:14 -- host/fio.sh@24 -- # nvmfpid=81930 00:20:09.213 12:04:14 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:09.213 12:04:14 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.213 12:04:14 -- host/fio.sh@28 -- # waitforlisten 81930 00:20:09.213 12:04:14 -- common/autotest_common.sh@829 -- # '[' -z 81930 ']' 00:20:09.213 12:04:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.213 12:04:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.213 12:04:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.213 12:04:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.213 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:09.213 [2024-11-29 12:04:14.598133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:09.213 [2024-11-29 12:04:14.598239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.471 [2024-11-29 12:04:14.740580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.471 [2024-11-29 12:04:14.838826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.471 [2024-11-29 12:04:14.838989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.471 [2024-11-29 12:04:14.839004] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.471 [2024-11-29 12:04:14.839015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
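The network plumbing that nvmf_veth_init performs above condenses to the following sketch. Every command is lifted from the trace (same interface and namespace names); error handling, teardown, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) are trimmed, so treat it as an outline rather than the test's actual helper.

#!/usr/bin/env bash
# Sketch of the veth/namespace topology used by the host tests: the target
# runs inside nvmf_tgt_ns_spdk on 10.0.0.2, the initiator stays in the
# default namespace on 10.0.0.1, bridged through nvmf_br.
set -e

ip netns add nvmf_tgt_ns_spdk

# one veth pair for the initiator side, one for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

# move the target end into the namespace, then address both ends
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# bring the links up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the two host-side peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# admit NVMe/TCP traffic and verify reachability, as the log does
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2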
00:20:09.471 [2024-11-29 12:04:14.839189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.471 [2024-11-29 12:04:14.839676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.471 [2024-11-29 12:04:14.839810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.471 [2024-11-29 12:04:14.839866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.402 12:04:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.402 12:04:15 -- common/autotest_common.sh@862 -- # return 0 00:20:10.402 12:04:15 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:10.402 [2024-11-29 12:04:15.794033] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.402 12:04:15 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:10.402 12:04:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.402 12:04:15 -- common/autotest_common.sh@10 -- # set +x 00:20:10.402 12:04:15 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:10.969 Malloc1 00:20:10.969 12:04:16 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:11.227 12:04:16 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:11.485 12:04:16 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.743 [2024-11-29 12:04:17.047145] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.743 12:04:17 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:12.001 12:04:17 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:12.001 12:04:17 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:12.001 12:04:17 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:12.001 12:04:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:12.001 12:04:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.001 12:04:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:12.001 12:04:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:12.001 12:04:17 -- common/autotest_common.sh@1330 -- # shift 00:20:12.001 12:04:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:12.001 12:04:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.001 12:04:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:12.001 12:04:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:12.002 12:04:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:12.002 12:04:17 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:12.002 12:04:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:12.002 12:04:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:12.002 12:04:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:12.002 12:04:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:12.002 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:12.002 fio-3.35 00:20:12.002 Starting 1 thread 00:20:14.536 00:20:14.536 test: (groupid=0, jobs=1): err= 0: pid=82013: Fri Nov 29 12:04:19 2024 00:20:14.536 read: IOPS=8961, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:20:14.536 slat (nsec): min=1959, max=294328, avg=2645.42, stdev=3163.43 00:20:14.536 clat (usec): min=2307, max=13028, avg=7415.34, stdev=537.17 00:20:14.536 lat (usec): min=2355, max=13031, avg=7417.99, stdev=536.95 00:20:14.536 clat percentiles (usec): 00:20:14.536 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:20:14.536 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:20:14.536 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:20:14.536 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[11207], 00:20:14.536 | 99.99th=[13042] 00:20:14.536 bw ( KiB/s): min=35048, max=36504, per=100.00%, avg=35850.00, stdev=600.95, samples=4 00:20:14.536 iops : min= 8762, max= 9126, avg=8962.50, stdev=150.24, samples=4 00:20:14.536 write: IOPS=8980, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec); 0 zone resets 00:20:14.536 slat (usec): min=2, max=273, avg= 2.79, stdev= 2.38 00:20:14.536 clat (usec): min=2178, max=12417, avg=6789.41, stdev=515.70 00:20:14.536 lat (usec): min=2190, max=12420, avg=6792.20, stdev=515.69 00:20:14.536 clat percentiles (usec): 00:20:14.536 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:20:14.536 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:20:14.536 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7504], 00:20:14.536 | 99.00th=[ 7898], 99.50th=[ 8225], 99.90th=[10814], 99.95th=[11994], 00:20:14.536 | 99.99th=[12387] 00:20:14.536 bw ( KiB/s): min=35328, max=36616, per=99.98%, avg=35914.00, stdev=533.32, samples=4 00:20:14.536 iops : min= 8832, max= 9154, avg=8978.50, stdev=133.33, samples=4 00:20:14.536 lat (msec) : 4=0.12%, 10=99.73%, 20=0.15% 00:20:14.536 cpu : usr=67.75%, sys=24.18%, ctx=5, majf=0, minf=5 00:20:14.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:14.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.536 issued rwts: total=17986,18023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.536 00:20:14.536 Run status group 0 (all jobs): 00:20:14.536 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 
00:20:14.536 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:20:14.536 12:04:19 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:14.536 12:04:19 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:14.536 12:04:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:14.536 12:04:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.536 12:04:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:14.536 12:04:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:14.536 12:04:19 -- common/autotest_common.sh@1330 -- # shift 00:20:14.536 12:04:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:14.536 12:04:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:14.536 12:04:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:14.536 12:04:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:14.536 12:04:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:14.536 12:04:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:14.536 12:04:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:14.536 12:04:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:14.536 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:14.536 fio-3.35 00:20:14.536 Starting 1 thread 00:20:17.068 00:20:17.068 test: (groupid=0, jobs=1): err= 0: pid=82062: Fri Nov 29 12:04:22 2024 00:20:17.068 read: IOPS=8253, BW=129MiB/s (135MB/s)(259MiB/2009msec) 00:20:17.068 slat (usec): min=3, max=132, avg= 4.06, stdev= 2.14 00:20:17.068 clat (usec): min=1844, max=18386, avg=8428.02, stdev=2440.38 00:20:17.068 lat (usec): min=1847, max=18389, avg=8432.08, stdev=2440.56 00:20:17.068 clat percentiles (usec): 00:20:17.068 | 1.00th=[ 4178], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6194], 00:20:17.068 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 8848], 00:20:17.068 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11338], 95.00th=[12518], 00:20:17.068 | 99.00th=[15664], 99.50th=[16319], 99.90th=[17433], 99.95th=[17957], 00:20:17.068 | 99.99th=[18220] 00:20:17.068 bw ( KiB/s): min=59072, max=72960, per=50.47%, avg=66656.00, stdev=5718.25, samples=4 00:20:17.068 iops : min= 3692, max= 
4560, avg=4166.00, stdev=357.39, samples=4 00:20:17.068 write: IOPS=4827, BW=75.4MiB/s (79.1MB/s)(136MiB/1802msec); 0 zone resets 00:20:17.069 slat (usec): min=34, max=381, avg=40.35, stdev= 8.76 00:20:17.069 clat (usec): min=3148, max=22670, avg=12562.47, stdev=2653.76 00:20:17.069 lat (usec): min=3184, max=22716, avg=12602.82, stdev=2656.43 00:20:17.069 clat percentiles (usec): 00:20:17.069 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:20:17.069 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 00:20:17.069 | 70.00th=[13698], 80.00th=[14877], 90.00th=[16319], 95.00th=[17171], 00:20:17.069 | 99.00th=[19530], 99.50th=[20055], 99.90th=[22414], 99.95th=[22414], 00:20:17.069 | 99.99th=[22676] 00:20:17.069 bw ( KiB/s): min=60928, max=75744, per=89.76%, avg=69336.00, stdev=6161.39, samples=4 00:20:17.069 iops : min= 3808, max= 4734, avg=4333.50, stdev=385.09, samples=4 00:20:17.069 lat (msec) : 2=0.01%, 4=0.40%, 10=52.81%, 20=46.59%, 50=0.18% 00:20:17.069 cpu : usr=79.08%, sys=14.64%, ctx=6, majf=0, minf=1 00:20:17.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:17.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:17.069 issued rwts: total=16582,8700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:17.069 00:20:17.069 Run status group 0 (all jobs): 00:20:17.069 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2009-2009msec 00:20:17.069 WRITE: bw=75.4MiB/s (79.1MB/s), 75.4MiB/s-75.4MiB/s (79.1MB/s-79.1MB/s), io=136MiB (143MB), run=1802-1802msec 00:20:17.069 12:04:22 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.327 12:04:22 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:17.327 12:04:22 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:17.327 12:04:22 -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:17.327 12:04:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:20:17.327 12:04:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:20:17.327 12:04:22 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:17.327 12:04:22 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:17.327 12:04:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:20:17.327 12:04:22 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:20:17.327 12:04:22 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:17.327 12:04:22 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:17.585 Nvme0n1 00:20:17.585 12:04:23 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:17.853 12:04:23 -- host/fio.sh@53 -- # ls_guid=bfff5448-9d2c-4edb-8385-35029613c0a6 00:20:17.853 12:04:23 -- host/fio.sh@54 -- # get_lvs_free_mb bfff5448-9d2c-4edb-8385-35029613c0a6 00:20:17.853 12:04:23 -- common/autotest_common.sh@1353 -- # local lvs_uuid=bfff5448-9d2c-4edb-8385-35029613c0a6 00:20:17.853 12:04:23 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:17.853 12:04:23 -- common/autotest_common.sh@1355 -- # local fc 00:20:17.853 12:04:23 -- 
common/autotest_common.sh@1356 -- # local cs 00:20:17.853 12:04:23 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:18.129 12:04:23 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:18.129 { 00:20:18.129 "uuid": "bfff5448-9d2c-4edb-8385-35029613c0a6", 00:20:18.129 "name": "lvs_0", 00:20:18.129 "base_bdev": "Nvme0n1", 00:20:18.129 "total_data_clusters": 4, 00:20:18.129 "free_clusters": 4, 00:20:18.129 "block_size": 4096, 00:20:18.129 "cluster_size": 1073741824 00:20:18.129 } 00:20:18.129 ]' 00:20:18.129 12:04:23 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="bfff5448-9d2c-4edb-8385-35029613c0a6") .free_clusters' 00:20:18.389 12:04:23 -- common/autotest_common.sh@1358 -- # fc=4 00:20:18.389 12:04:23 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="bfff5448-9d2c-4edb-8385-35029613c0a6") .cluster_size' 00:20:18.389 12:04:23 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:20:18.389 12:04:23 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:20:18.389 4096 00:20:18.389 12:04:23 -- common/autotest_common.sh@1363 -- # echo 4096 00:20:18.389 12:04:23 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:18.648 d4430998-fc64-42b1-aec2-06faa1f579c6 00:20:18.648 12:04:24 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:18.906 12:04:24 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:19.165 12:04:24 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:19.423 12:04:24 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:19.423 12:04:24 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:19.423 12:04:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:19.423 12:04:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.423 12:04:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:19.423 12:04:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.423 12:04:24 -- common/autotest_common.sh@1330 -- # shift 00:20:19.423 12:04:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:19.423 12:04:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:19.423 12:04:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:19.423 12:04:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:19.423 12:04:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:19.423 12:04:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:19.423 12:04:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:19.423 12:04:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:19.681 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:19.681 fio-3.35 00:20:19.681 Starting 1 thread 00:20:22.218 00:20:22.218 test: (groupid=0, jobs=1): err= 0: pid=82172: Fri Nov 29 12:04:27 2024 00:20:22.218 read: IOPS=6707, BW=26.2MiB/s (27.5MB/s)(52.6MiB/2008msec) 00:20:22.218 slat (nsec): min=1925, max=198500, avg=2426.62, stdev=2261.10 00:20:22.218 clat (usec): min=2633, max=17877, avg=9962.16, stdev=843.54 00:20:22.218 lat (usec): min=2638, max=17880, avg=9964.59, stdev=843.37 00:20:22.218 clat percentiles (usec): 00:20:22.218 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:20:22.218 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:20:22.218 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:20:22.218 | 99.00th=[11863], 99.50th=[12125], 99.90th=[16450], 99.95th=[17695], 00:20:22.218 | 99.99th=[17957] 00:20:22.218 bw ( KiB/s): min=25992, max=27264, per=99.89%, avg=26800.00, stdev=597.49, samples=4 00:20:22.218 iops : min= 6498, max= 6816, avg=6700.00, stdev=149.37, samples=4 00:20:22.218 write: IOPS=6711, BW=26.2MiB/s (27.5MB/s)(52.6MiB/2008msec); 0 zone resets 00:20:22.218 slat (usec): min=2, max=134, avg= 2.53, stdev= 1.51 00:20:22.218 clat (usec): min=1732, max=15760, avg=9028.21, stdev=781.49 00:20:22.218 lat (usec): min=1740, max=15763, avg=9030.75, stdev=781.43 00:20:22.218 clat percentiles (usec): 00:20:22.218 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:20:22.218 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:20:22.218 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:20:22.218 | 99.00th=[10683], 99.50th=[11076], 99.90th=[14746], 99.95th=[15664], 00:20:22.218 | 99.99th=[15795] 00:20:22.218 bw ( KiB/s): min=26640, max=27000, per=99.96%, avg=26834.00, stdev=167.35, samples=4 00:20:22.218 iops : min= 6660, max= 6750, avg=6708.50, stdev=41.84, samples=4 00:20:22.218 lat (msec) : 2=0.01%, 4=0.10%, 10=71.90%, 20=28.00% 00:20:22.218 cpu : usr=73.89%, sys=20.13%, ctx=5, majf=0, minf=5 00:20:22.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:22.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:22.218 issued rwts: total=13468,13476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:22.218 00:20:22.218 Run status group 0 (all jobs): 00:20:22.218 READ: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.6MiB (55.2MB), run=2008-2008msec 00:20:22.218 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.6MiB (55.2MB), run=2008-2008msec 00:20:22.218 12:04:27 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:22.218 12:04:27 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:22.477 12:04:27 -- host/fio.sh@64 -- # ls_nested_guid=f33b6ba3-37b3-488d-be9f-4036e2829f93 00:20:22.477 12:04:27 -- host/fio.sh@65 -- # get_lvs_free_mb f33b6ba3-37b3-488d-be9f-4036e2829f93 00:20:22.477 12:04:27 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f33b6ba3-37b3-488d-be9f-4036e2829f93 00:20:22.477 12:04:27 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:22.477 12:04:27 -- common/autotest_common.sh@1355 -- # local fc 00:20:22.477 12:04:27 -- common/autotest_common.sh@1356 -- # local cs 00:20:22.477 12:04:27 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.741 12:04:28 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:22.741 { 00:20:22.741 "uuid": "bfff5448-9d2c-4edb-8385-35029613c0a6", 00:20:22.741 "name": "lvs_0", 00:20:22.741 "base_bdev": "Nvme0n1", 00:20:22.741 "total_data_clusters": 4, 00:20:22.741 "free_clusters": 0, 00:20:22.741 "block_size": 4096, 00:20:22.741 "cluster_size": 1073741824 00:20:22.741 }, 00:20:22.741 { 00:20:22.741 "uuid": "f33b6ba3-37b3-488d-be9f-4036e2829f93", 00:20:22.741 "name": "lvs_n_0", 00:20:22.741 "base_bdev": "d4430998-fc64-42b1-aec2-06faa1f579c6", 00:20:22.741 "total_data_clusters": 1022, 00:20:22.741 "free_clusters": 1022, 00:20:22.741 "block_size": 4096, 00:20:22.741 "cluster_size": 4194304 00:20:22.741 } 00:20:22.741 ]' 00:20:22.741 12:04:28 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f33b6ba3-37b3-488d-be9f-4036e2829f93") .free_clusters' 00:20:22.999 12:04:28 -- common/autotest_common.sh@1358 -- # fc=1022 00:20:22.999 12:04:28 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f33b6ba3-37b3-488d-be9f-4036e2829f93") .cluster_size' 00:20:22.999 12:04:28 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:22.999 12:04:28 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:20:22.999 12:04:28 -- common/autotest_common.sh@1363 -- # echo 4088 00:20:22.999 4088 00:20:22.999 12:04:28 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:23.258 3d54e356-a3f6-411e-bf90-bb80bacf266f 00:20:23.258 12:04:28 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:23.516 12:04:28 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:23.774 12:04:29 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:24.033 12:04:29 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:24.033 12:04:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:24.033 12:04:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:24.033 12:04:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.033 
12:04:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:24.033 12:04:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:24.033 12:04:29 -- common/autotest_common.sh@1330 -- # shift 00:20:24.033 12:04:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:24.033 12:04:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:24.033 12:04:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:24.033 12:04:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:24.033 12:04:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:24.033 12:04:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:24.033 12:04:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:24.033 12:04:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:24.292 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:24.292 fio-3.35 00:20:24.292 Starting 1 thread 00:20:26.826 00:20:26.826 test: (groupid=0, jobs=1): err= 0: pid=82252: Fri Nov 29 12:04:31 2024 00:20:26.826 read: IOPS=5960, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2009msec) 00:20:26.826 slat (usec): min=2, max=228, avg= 2.62, stdev= 2.61 00:20:26.826 clat (usec): min=3104, max=20604, avg=11235.11, stdev=955.15 00:20:26.826 lat (usec): min=3111, max=20607, avg=11237.72, stdev=954.96 00:20:26.826 clat percentiles (usec): 00:20:26.826 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:20:26.826 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:20:26.826 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:20:26.826 | 99.00th=[13304], 99.50th=[13829], 99.90th=[17957], 99.95th=[19530], 00:20:26.826 | 99.99th=[20579] 00:20:26.826 bw ( KiB/s): min=22960, max=24168, per=99.82%, avg=23800.00, stdev=571.35, samples=4 00:20:26.826 iops : min= 5740, max= 6042, avg=5950.00, stdev=142.84, samples=4 00:20:26.826 write: IOPS=5947, BW=23.2MiB/s (24.4MB/s)(46.7MiB/2009msec); 0 zone resets 00:20:26.826 slat (usec): min=2, max=143, avg= 2.72, stdev= 1.64 00:20:26.826 clat (usec): min=2038, max=19168, avg=10179.66, stdev=888.03 00:20:26.826 lat (usec): min=2049, max=19170, avg=10182.38, stdev=887.96 00:20:26.826 clat percentiles (usec): 00:20:26.826 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:20:26.826 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:20:26.826 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:20:26.826 | 99.00th=[12125], 99.50th=[12387], 99.90th=[16581], 99.95th=[17695], 00:20:26.826 | 99.99th=[19268] 00:20:26.826 bw ( KiB/s): 
min=23680, max=23880, per=100.00%, avg=23794.00, stdev=83.23, samples=4 00:20:26.826 iops : min= 5920, max= 5970, avg=5948.50, stdev=20.81, samples=4 00:20:26.826 lat (msec) : 4=0.06%, 10=23.93%, 20=75.99%, 50=0.02% 00:20:26.826 cpu : usr=73.11%, sys=21.07%, ctx=5, majf=0, minf=5 00:20:26.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:26.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.826 issued rwts: total=11975,11949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.826 00:20:26.826 Run status group 0 (all jobs): 00:20:26.826 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.0MB), run=2009-2009msec 00:20:26.826 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (48.9MB), run=2009-2009msec 00:20:26.826 12:04:31 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:26.826 12:04:32 -- host/fio.sh@74 -- # sync 00:20:26.826 12:04:32 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:27.085 12:04:32 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:27.344 12:04:32 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:20:27.603 12:04:33 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:27.862 12:04:33 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:29.239 12:04:34 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:29.239 12:04:34 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:29.239 12:04:34 -- host/fio.sh@86 -- # nvmftestfini 00:20:29.239 12:04:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:29.239 12:04:34 -- nvmf/common.sh@116 -- # sync 00:20:29.239 12:04:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:29.239 12:04:34 -- nvmf/common.sh@119 -- # set +e 00:20:29.239 12:04:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:29.239 12:04:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:29.239 rmmod nvme_tcp 00:20:29.239 rmmod nvme_fabrics 00:20:29.239 rmmod nvme_keyring 00:20:29.239 12:04:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:29.239 12:04:34 -- nvmf/common.sh@123 -- # set -e 00:20:29.239 12:04:34 -- nvmf/common.sh@124 -- # return 0 00:20:29.239 12:04:34 -- nvmf/common.sh@477 -- # '[' -n 81930 ']' 00:20:29.239 12:04:34 -- nvmf/common.sh@478 -- # killprocess 81930 00:20:29.239 12:04:34 -- common/autotest_common.sh@936 -- # '[' -z 81930 ']' 00:20:29.239 12:04:34 -- common/autotest_common.sh@940 -- # kill -0 81930 00:20:29.239 12:04:34 -- common/autotest_common.sh@941 -- # uname 00:20:29.239 12:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:29.239 12:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81930 00:20:29.239 killing process with pid 81930 00:20:29.239 12:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:29.239 12:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:29.239 12:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81930' 00:20:29.239 12:04:34 -- 
common/autotest_common.sh@955 -- # kill 81930 00:20:29.239 12:04:34 -- common/autotest_common.sh@960 -- # wait 81930 00:20:29.498 12:04:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.498 12:04:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.498 12:04:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:29.498 12:04:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.498 12:04:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.498 12:04:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.498 12:04:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.498 12:04:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.498 12:04:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:29.498 00:20:29.498 real 0m21.019s 00:20:29.498 user 1m32.358s 00:20:29.498 sys 0m4.570s 00:20:29.498 12:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:29.498 12:04:34 -- common/autotest_common.sh@10 -- # set +x 00:20:29.498 ************************************ 00:20:29.498 END TEST nvmf_fio_host 00:20:29.498 ************************************ 00:20:29.757 12:04:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:29.757 12:04:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:29.757 12:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.757 12:04:35 -- common/autotest_common.sh@10 -- # set +x 00:20:29.757 ************************************ 00:20:29.757 START TEST nvmf_failover 00:20:29.757 ************************************ 00:20:29.757 12:04:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:29.757 * Looking for test storage... 00:20:29.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.758 12:04:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:29.758 12:04:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:29.758 12:04:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:29.758 12:04:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:29.758 12:04:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:29.758 12:04:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:29.758 12:04:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:29.758 12:04:35 -- scripts/common.sh@335 -- # IFS=.-: 00:20:29.758 12:04:35 -- scripts/common.sh@335 -- # read -ra ver1 00:20:29.758 12:04:35 -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.758 12:04:35 -- scripts/common.sh@336 -- # read -ra ver2 00:20:29.758 12:04:35 -- scripts/common.sh@337 -- # local 'op=<' 00:20:29.758 12:04:35 -- scripts/common.sh@339 -- # ver1_l=2 00:20:29.758 12:04:35 -- scripts/common.sh@340 -- # ver2_l=1 00:20:29.758 12:04:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:29.758 12:04:35 -- scripts/common.sh@343 -- # case "$op" in 00:20:29.758 12:04:35 -- scripts/common.sh@344 -- # : 1 00:20:29.758 12:04:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:29.758 12:04:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.758 12:04:35 -- scripts/common.sh@364 -- # decimal 1 00:20:29.758 12:04:35 -- scripts/common.sh@352 -- # local d=1 00:20:29.758 12:04:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.758 12:04:35 -- scripts/common.sh@354 -- # echo 1 00:20:29.758 12:04:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:29.758 12:04:35 -- scripts/common.sh@365 -- # decimal 2 00:20:29.758 12:04:35 -- scripts/common.sh@352 -- # local d=2 00:20:29.758 12:04:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.758 12:04:35 -- scripts/common.sh@354 -- # echo 2 00:20:29.758 12:04:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:29.758 12:04:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:29.758 12:04:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:29.758 12:04:35 -- scripts/common.sh@367 -- # return 0 00:20:29.758 12:04:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.758 12:04:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:29.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.758 --rc genhtml_branch_coverage=1 00:20:29.758 --rc genhtml_function_coverage=1 00:20:29.758 --rc genhtml_legend=1 00:20:29.758 --rc geninfo_all_blocks=1 00:20:29.758 --rc geninfo_unexecuted_blocks=1 00:20:29.758 00:20:29.758 ' 00:20:29.758 12:04:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:29.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.758 --rc genhtml_branch_coverage=1 00:20:29.758 --rc genhtml_function_coverage=1 00:20:29.758 --rc genhtml_legend=1 00:20:29.758 --rc geninfo_all_blocks=1 00:20:29.758 --rc geninfo_unexecuted_blocks=1 00:20:29.758 00:20:29.758 ' 00:20:29.758 12:04:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:29.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.758 --rc genhtml_branch_coverage=1 00:20:29.758 --rc genhtml_function_coverage=1 00:20:29.758 --rc genhtml_legend=1 00:20:29.758 --rc geninfo_all_blocks=1 00:20:29.758 --rc geninfo_unexecuted_blocks=1 00:20:29.758 00:20:29.758 ' 00:20:29.758 12:04:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:29.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.758 --rc genhtml_branch_coverage=1 00:20:29.758 --rc genhtml_function_coverage=1 00:20:29.758 --rc genhtml_legend=1 00:20:29.758 --rc geninfo_all_blocks=1 00:20:29.758 --rc geninfo_unexecuted_blocks=1 00:20:29.758 00:20:29.758 ' 00:20:29.758 12:04:35 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.758 12:04:35 -- nvmf/common.sh@7 -- # uname -s 00:20:29.758 12:04:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.758 12:04:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.758 12:04:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.758 12:04:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.758 12:04:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.758 12:04:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.758 12:04:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.758 12:04:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.758 12:04:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.758 12:04:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.758 12:04:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:20:29.758 
12:04:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:20:29.758 12:04:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.758 12:04:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.758 12:04:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.758 12:04:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.758 12:04:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.758 12:04:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.758 12:04:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.758 12:04:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.758 12:04:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.758 12:04:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.758 12:04:35 -- paths/export.sh@5 -- # export PATH 00:20:29.758 12:04:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.758 12:04:35 -- nvmf/common.sh@46 -- # : 0 00:20:29.758 12:04:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.758 12:04:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.758 12:04:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.758 12:04:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.758 12:04:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.758 12:04:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
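
As an aside on the host identity set up just above: the NVME_HOSTNQN/NVME_HOSTID pair can be reproduced stand-alone with nvme-cli roughly as follows. Deriving the host ID from the uuid suffix of the generated NQN is an assumption here, consistent with the values in the trace, not a quote of nvmf/common.sh.

  # Sketch: build the host-identity arguments the way the trace above suggests.
  # Requires nvme-cli; the HOSTID derivation (uuid suffix of the NQN) is an assumption.
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep the trailing uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  printf '%s\n' "${NVME_HOST[@]}"
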
00:20:29.758 12:04:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.758 12:04:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.758 12:04:35 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.758 12:04:35 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.759 12:04:35 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.759 12:04:35 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.759 12:04:35 -- host/failover.sh@18 -- # nvmftestinit 00:20:29.759 12:04:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:29.759 12:04:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.759 12:04:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:29.759 12:04:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:29.759 12:04:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:29.759 12:04:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.759 12:04:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.759 12:04:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.759 12:04:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:29.759 12:04:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:29.759 12:04:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:29.759 12:04:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:29.759 12:04:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:29.759 12:04:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:29.759 12:04:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.759 12:04:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.759 12:04:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.759 12:04:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:29.759 12:04:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.759 12:04:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.759 12:04:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.759 12:04:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.759 12:04:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.759 12:04:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.759 12:04:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.759 12:04:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.759 12:04:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:29.759 12:04:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:30.016 Cannot find device "nvmf_tgt_br" 00:20:30.016 12:04:35 -- nvmf/common.sh@154 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.016 Cannot find device "nvmf_tgt_br2" 00:20:30.016 12:04:35 -- nvmf/common.sh@155 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:30.016 12:04:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:30.016 Cannot find device "nvmf_tgt_br" 00:20:30.016 12:04:35 -- nvmf/common.sh@157 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:30.016 Cannot find device "nvmf_tgt_br2" 00:20:30.016 12:04:35 -- nvmf/common.sh@158 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:30.016 12:04:35 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:20:30.016 12:04:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.016 12:04:35 -- nvmf/common.sh@161 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.016 12:04:35 -- nvmf/common.sh@162 -- # true 00:20:30.016 12:04:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.016 12:04:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.016 12:04:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.016 12:04:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.016 12:04:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.016 12:04:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.016 12:04:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.016 12:04:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:30.016 12:04:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:30.016 12:04:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:30.016 12:04:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:30.016 12:04:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:30.016 12:04:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:30.016 12:04:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.016 12:04:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.016 12:04:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.016 12:04:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:30.016 12:04:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:30.016 12:04:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.016 12:04:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.016 12:04:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.274 12:04:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.274 12:04:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.274 12:04:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:30.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:20:30.274 00:20:30.274 --- 10.0.0.2 ping statistics --- 00:20:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.274 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:30.274 12:04:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:30.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:30.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:20:30.274 00:20:30.274 --- 10.0.0.3 ping statistics --- 00:20:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.274 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:30.274 12:04:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:30.274 00:20:30.274 --- 10.0.0.1 ping statistics --- 00:20:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.274 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:30.274 12:04:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.274 12:04:35 -- nvmf/common.sh@421 -- # return 0 00:20:30.274 12:04:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:30.274 12:04:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.274 12:04:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:30.274 12:04:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:30.274 12:04:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.274 12:04:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:30.274 12:04:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:30.274 12:04:35 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:30.274 12:04:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:30.274 12:04:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.274 12:04:35 -- common/autotest_common.sh@10 -- # set +x 00:20:30.274 12:04:35 -- nvmf/common.sh@469 -- # nvmfpid=82513 00:20:30.274 12:04:35 -- nvmf/common.sh@470 -- # waitforlisten 82513 00:20:30.274 12:04:35 -- common/autotest_common.sh@829 -- # '[' -z 82513 ']' 00:20:30.274 12:04:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:30.274 12:04:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.274 12:04:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.274 12:04:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.274 12:04:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.274 12:04:35 -- common/autotest_common.sh@10 -- # set +x 00:20:30.274 [2024-11-29 12:04:35.634310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:30.274 [2024-11-29 12:04:35.634402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.274 [2024-11-29 12:04:35.774859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:30.534 [2024-11-29 12:04:35.872556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.534 [2024-11-29 12:04:35.873078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.534 [2024-11-29 12:04:35.873112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
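
Condensed for reference: before the target app was started above, nvmf_veth_init laid out the veth/namespace topology whose pings were just verified. This is a sketch of the logged iproute2/iptables commands (run as root), not the harness code itself.

  # Sketch of the test network built above: one namespace for the target,
  # veth pairs bridged to the initiator side, 10.0.0.1 = initiator, 10.0.0.2/.3 = target.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # sanity check: initiator can reach the target namespace
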
00:20:30.534 [2024-11-29 12:04:35.873128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.534 [2024-11-29 12:04:35.873210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.534 [2024-11-29 12:04:35.873980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.534 [2024-11-29 12:04:35.873990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.469 12:04:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.469 12:04:36 -- common/autotest_common.sh@862 -- # return 0 00:20:31.469 12:04:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:31.469 12:04:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.469 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:20:31.469 12:04:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.469 12:04:36 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:31.469 [2024-11-29 12:04:36.932315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.469 12:04:36 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:31.728 Malloc0 00:20:31.728 12:04:37 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.295 12:04:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.554 12:04:37 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.813 [2024-11-29 12:04:38.089563] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.813 12:04:38 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:33.072 [2024-11-29 12:04:38.361835] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:33.072 12:04:38 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:33.331 [2024-11-29 12:04:38.602076] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:33.331 12:04:38 -- host/failover.sh@31 -- # bdevperf_pid=82576 00:20:33.331 12:04:38 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:33.331 12:04:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.331 12:04:38 -- host/failover.sh@34 -- # waitforlisten 82576 /var/tmp/bdevperf.sock 00:20:33.331 12:04:38 -- common/autotest_common.sh@829 -- # '[' -z 82576 ']' 00:20:33.331 12:04:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
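
For reference, the target-side fixture that host/failover.sh built above reduces to the RPC sequence below. This is a sketch condensed from the logged rpc.py calls (paths match this run's workspace); the loop over ports is shorthand for the three separate add_listener calls in the trace.

  # Sketch: TCP transport, a 64 MiB malloc namespace, one subsystem listening on
  # three ports of 10.0.0.2, and bdevperf started as the initiator-side workload.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # bdevperf waits for jobs over its own RPC socket (-z) until perform_tests is issued:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
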
00:20:33.331 12:04:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.331 12:04:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.331 12:04:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.331 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:20:34.266 12:04:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.266 12:04:39 -- common/autotest_common.sh@862 -- # return 0 00:20:34.266 12:04:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.525 NVMe0n1 00:20:34.525 12:04:39 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.783 00:20:34.783 12:04:40 -- host/failover.sh@39 -- # run_test_pid=82594 00:20:34.783 12:04:40 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.783 12:04:40 -- host/failover.sh@41 -- # sleep 1 00:20:36.159 12:04:41 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.159 [2024-11-29 12:04:41.582903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.582969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.582982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.582991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same 
with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583115] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 [2024-11-29 12:04:41.583179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7922b0 is same with the state(5) to be set 00:20:36.159 12:04:41 -- host/failover.sh@45 -- # sleep 3 00:20:39.439 12:04:44 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:39.696 00:20:39.696 12:04:45 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:39.953 [2024-11-29 12:04:45.271336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.271900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.271986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.272062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.272135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.272197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.272263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 [2024-11-29 12:04:45.272324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5de6b0 is same with the state(5) to be set 00:20:39.953 12:04:45 -- host/failover.sh@50 -- # sleep 3 00:20:43.238 12:04:48 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.238 [2024-11-29 12:04:48.565532] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.238 12:04:48 -- host/failover.sh@55 -- # sleep 1 00:20:44.172 12:04:49 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:44.431 [2024-11-29 12:04:49.854376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 [2024-11-29 12:04:49.854594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785b20 is same with the state(5) to be set 00:20:44.431 12:04:49 -- host/failover.sh@59 -- # wait 82594 00:20:51.016 0 00:20:51.016 12:04:55 -- host/failover.sh@61 -- # killprocess 82576 00:20:51.016 12:04:55 -- common/autotest_common.sh@936 -- # '[' -z 82576 ']' 00:20:51.016 12:04:55 -- common/autotest_common.sh@940 -- # kill -0 82576 00:20:51.016 12:04:55 -- common/autotest_common.sh@941 -- # uname 00:20:51.016 12:04:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.016 12:04:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82576 00:20:51.016 killing process with pid 82576 00:20:51.016 12:04:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:51.016 12:04:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:51.016 12:04:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82576' 00:20:51.016 12:04:55 -- common/autotest_common.sh@955 -- # kill 82576 00:20:51.016 12:04:55 -- common/autotest_common.sh@960 -- # wait 82576 00:20:51.016 12:04:55 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:51.016 [2024-11-29 12:04:38.662672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:51.016 [2024-11-29 12:04:38.662777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82576 ] 00:20:51.016 [2024-11-29 12:04:38.802585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.016 [2024-11-29 12:04:38.873896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.016 Running I/O for 15 seconds... 
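
For reference, the failover choreography traced above, while bdevperf keeps its verify workload running over the attached paths, condenses to the listener juggling below. This is a sketch of the logged rpc.py calls; the sleeps are the harness's own pacing, and the aborted-command records in the bdevperf log that follows are the expected fallout of removing a listener under load.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # attach the same subsystem over two ports via bdevperf's RPC socket:
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  # then juggle listeners on the target while I/O is in flight:
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # force I/O onto 4421
  sleep 3
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # force I/O onto 4422
  sleep 3
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420       # restore the original port
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420
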
00:20:51.017 [2024-11-29 12:04:41.583315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 12:04:41.583677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.017 [2024-11-29 12:04:41.583690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.017 [2024-11-29 
12:04:41.583706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... long run of similar paired *NOTICE* entries trimmed: nvme_io_qpair_print_command reporting each queued READ/WRITE on sqid:1 (nsid:1, len:8, various cid and lba) and spdk_nvme_print_completion reporting every one of them as ABORTED - SQ DELETION (00/08), 2024-11-29 12:04:41.583720 through 12:04:41.587336 ...] 
00:20:51.020 [2024-11-29 12:04:41.587350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a40 is same with the state(5) to be set 
00:20:51.020 [2024-11-29 12:04:41.587366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:20:51.020 [2024-11-29 12:04:41.587382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:20:51.020 [2024-11-29 12:04:41.587393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125472 len:8 PRP1 0x0 PRP2 0x0 
00:20:51.020 [2024-11-29 12:04:41.587407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.020 [2024-11-29 12:04:41.587463] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x944a40 was disconnected and freed. reset controller. 
00:20:51.020 [2024-11-29 12:04:41.587481] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:20:51.020 [2024-11-29 12:04:41.587571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.020 [2024-11-29 12:04:41.587594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.020 [2024-11-29 12:04:41.587610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.020 [2024-11-29 12:04:41.587624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.020 [2024-11-29 12:04:41.587639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.020 [2024-11-29 12:04:41.587656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.020 [2024-11-29 12:04:41.587671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.020 [2024-11-29 12:04:41.587684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.020 [2024-11-29 12:04:41.587698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:51.020 [2024-11-29 12:04:41.587752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910d40 (9): Bad file descriptor 
00:20:51.020 [2024-11-29 12:04:41.589928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:51.020 [2024-11-29 12:04:41.618682] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
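
The sequence above is the bdev_nvme failover path as recorded by this run: once the TCP connection to the first portal is gone, every command still queued on the I/O qpair and the outstanding admin ASYNC EVENT REQUESTs are completed as ABORTED - SQ DELETION (00/08), the controller is marked failed, and the driver reconnects through the second trid (10.0.0.2:4421) before reporting "Resetting controller successful." Below is a minimal sketch of the kind of two-portal NVMe-oF/TCP target such a failover test runs against, driven through SPDK's scripts/rpc.py; the bdev names Malloc0/Nvme0, the size, and the serial number are illustrative assumptions rather than values taken from this log, and the real test harness wires up the second initiator path and the later path removal itself:

  # target side: one subsystem, one namespace, two TCP listeners (assumed names/sizes)
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: attach a bdev through the first portal; taking that path down later
  # is what produces the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice above
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
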
00:20:51.020 [2024-11-29 12:04:45.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... long run of similar paired *NOTICE* entries trimmed: queued READ/WRITE commands on sqid:1 (nsid:1, len:8, various cid and lba), each reported by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) during the qpair teardown at 12:04:45, 2024-11-29 12:04:45.272601 through 12:04:45.274824 ...] 
00:20:51.022 [2024-11-29 12:04:45.274841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 
[2024-11-29 12:04:45.274857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.274874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.274889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.274907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.274922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.274940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.274956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.274984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.275797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.275962] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.275991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.276006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.276023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.022 [2024-11-29 12:04:45.276039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.276057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.022 [2024-11-29 12:04:45.276072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.022 [2024-11-29 12:04:45.276089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.023 [2024-11-29 12:04:45.276915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.276980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.276997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 
[2024-11-29 12:04:45.277075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.023 [2024-11-29 12:04:45.277253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92fc00 is same with the state(5) to be set 00:20:51.023 [2024-11-29 12:04:45.277301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:51.023 [2024-11-29 12:04:45.277314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:51.023 [2024-11-29 12:04:45.277344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119216 len:8 PRP1 0x0 PRP2 0x0 00:20:51.023 [2024-11-29 12:04:45.277360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277433] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x92fc00 was disconnected and freed. reset controller. 
00:20:51.023 [2024-11-29 12:04:45.277465] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:51.023 [2024-11-29 12:04:45.277570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.023 [2024-11-29 12:04:45.277598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.023 [2024-11-29 12:04:45.277637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.023 [2024-11-29 12:04:45.277668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.023 [2024-11-29 12:04:45.277697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.024 [2024-11-29 12:04:45.277713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:45.277732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:51.024 [2024-11-29 12:04:45.277810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910d40 (9): Bad file descriptor 00:20:51.024 [2024-11-29 12:04:45.279885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:51.024 [2024-11-29 12:04:45.307050] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
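The notices above record one complete failover cycle: queued I/O on the dying qpair is completed manually with ABORTED - SQ DELETION status, the qpair (0x92fc00) is disconnected and freed, the path fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset finishes successfully before the next batch of aborts begins. As a minimal sketch only (a hypothetical summarize_failover.py helper, not part of SPDK or this test suite), a console dump like this could be reduced to a few summary lines by matching the strings that appear in the log:

#!/usr/bin/env python3
# Hypothetical helper: condense an SPDK nvmf failover console log.
# Counts "ABORTED - SQ DELETION" completions and reports failover/reset
# events so a dump like the one above reads as a handful of lines.
import re
import sys

ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

def summarize(lines):
    aborted = 0
    for line in lines:
        # Each console line may hold several log entries, so count all matches.
        aborted += len(ABORT_RE.findall(line))
        m = FAILOVER_RE.search(line)
        if m:
            print(f"failover: {m.group(1)} -> {m.group(2)} "
                  f"(after {aborted} aborted completions)")
            aborted = 0
        if RESET_OK_RE.search(line):
            print("controller reset: successful")

if __name__ == "__main__":
    summarize(sys.stdin)

Run against a saved copy of this console output (e.g. python3 summarize_failover.py < build.log), it would print one failover line per cycle instead of the per-command abort notices; the regexes assume the exact notice wording shown in this log.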
00:20:51.024 [2024-11-29 12:04:49.854690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.854792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.854829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.854847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.854865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.854881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.854898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.854916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.854970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.854988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.855793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62840 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.855969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.855987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.856002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.856035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.856069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.024 [2024-11-29 12:04:49.856102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.856135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.024 [2024-11-29 12:04:49.856168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.024 [2024-11-29 12:04:49.856185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:51.025 [2024-11-29 12:04:49.856299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.856461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.856910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.856974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.856991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.857007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.857040] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.857073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.857105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.025 [2024-11-29 12:04:49.857138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.025 [2024-11-29 12:04:49.857398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.025 [2024-11-29 12:04:49.857413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.026 [2024-11-29 12:04:49.857751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.857800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.857984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.857999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 
12:04:49.858081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.026 [2024-11-29 12:04:49.858749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.026 [2024-11-29 12:04:49.858782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.026 [2024-11-29 12:04:49.858799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.858978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.858995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:51.027 [2024-11-29 12:04:49.859086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.027 [2024-11-29 12:04:49.859314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913970 is same with the state(5) to be set 00:20:51.027 [2024-11-29 12:04:49.859352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:51.027 [2024-11-29 12:04:49.859365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:51.027 [2024-11-29 12:04:49.859400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63432 len:8 PRP1 0x0 PRP2 0x0 00:20:51.027 [2024-11-29 12:04:49.859419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859521] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x913970 was disconnected and freed. reset controller. 
00:20:51.027 [2024-11-29 12:04:49.859563] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:51.027 [2024-11-29 12:04:49.859662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.027 [2024-11-29 12:04:49.859690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.027 [2024-11-29 12:04:49.859726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.027 [2024-11-29 12:04:49.859768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.027 [2024-11-29 12:04:49.859800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.027 [2024-11-29 12:04:49.859830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:51.027 [2024-11-29 12:04:49.859882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910d40 (9): Bad file descriptor 00:20:51.027 [2024-11-29 12:04:49.862321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:51.027 [2024-11-29 12:04:49.887978] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:51.027 00:20:51.027 Latency(us) 00:20:51.027 [2024-11-29T12:04:56.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.027 [2024-11-29T12:04:56.538Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.027 Verification LBA range: start 0x0 length 0x4000 00:20:51.027 NVMe0n1 : 15.01 12441.71 48.60 284.44 0.00 10039.28 420.77 16562.73 00:20:51.027 [2024-11-29T12:04:56.538Z] =================================================================================================================== 00:20:51.027 [2024-11-29T12:04:56.538Z] Total : 12441.71 48.60 284.44 0.00 10039.28 420.77 16562.73 00:20:51.027 Received shutdown signal, test time was about 15.000000 seconds 00:20:51.027 00:20:51.027 Latency(us) 00:20:51.027 [2024-11-29T12:04:56.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.027 [2024-11-29T12:04:56.538Z] =================================================================================================================== 00:20:51.027 [2024-11-29T12:04:56.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.027 12:04:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:51.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:51.027 12:04:55 -- host/failover.sh@65 -- # count=3 00:20:51.027 12:04:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:20:51.027 12:04:55 -- host/failover.sh@73 -- # bdevperf_pid=82773 00:20:51.027 12:04:55 -- host/failover.sh@75 -- # waitforlisten 82773 /var/tmp/bdevperf.sock 00:20:51.027 12:04:55 -- common/autotest_common.sh@829 -- # '[' -z 82773 ']' 00:20:51.027 12:04:55 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:51.027 12:04:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.027 12:04:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.027 12:04:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.027 12:04:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.027 12:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:51.286 12:04:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.286 12:04:56 -- common/autotest_common.sh@862 -- # return 0 00:20:51.286 12:04:56 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:51.546 [2024-11-29 12:04:56.899085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:51.546 12:04:56 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:51.805 [2024-11-29 12:04:57.143428] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:51.805 12:04:57 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:52.063 NVMe0n1 00:20:52.063 12:04:57 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:52.321 00:20:52.321 12:04:57 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:52.887 00:20:52.887 12:04:58 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:52.887 12:04:58 -- host/failover.sh@82 -- # grep -q NVMe0 00:20:52.887 12:04:58 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:53.456 12:04:58 -- host/failover.sh@87 -- # sleep 3 00:20:56.740 12:05:01 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.740 12:05:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:20:56.740 12:05:01 -- host/failover.sh@90 -- # run_test_pid=82856 00:20:56.740 12:05:01 -- host/failover.sh@92 -- # wait 82856 00:20:56.740 12:05:01 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.678 0 00:20:57.678 12:05:03 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:57.678 [2024-11-29 12:04:55.731171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:57.678 [2024-11-29 12:04:55.731298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82773 ] 00:20:57.678 [2024-11-29 12:04:55.874579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.678 [2024-11-29 12:04:55.976915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.678 [2024-11-29 12:04:58.660301] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:57.678 [2024-11-29 12:04:58.660466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.678 [2024-11-29 12:04:58.660495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-11-29 12:04:58.660526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.678 [2024-11-29 12:04:58.660545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-11-29 12:04:58.660560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.678 [2024-11-29 12:04:58.660574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-11-29 12:04:58.660589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:57.678 [2024-11-29 12:04:58.660603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.678 [2024-11-29 12:04:58.660618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:57.678 [2024-11-29 12:04:58.660678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:57.678 [2024-11-29 12:04:58.660714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b5d40 (9): Bad file descriptor 00:20:57.678 [2024-11-29 12:04:58.669831] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:57.678 Running I/O for 1 seconds... 
00:20:57.678 00:20:57.678 Latency(us) 00:20:57.678 [2024-11-29T12:05:03.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.678 [2024-11-29T12:05:03.189Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:57.678 Verification LBA range: start 0x0 length 0x4000 00:20:57.678 NVMe0n1 : 1.01 12192.86 47.63 0.00 0.00 10438.22 1102.20 12153.95 00:20:57.678 [2024-11-29T12:05:03.189Z] =================================================================================================================== 00:20:57.678 [2024-11-29T12:05:03.189Z] Total : 12192.86 47.63 0.00 0.00 10438.22 1102.20 12153.95 00:20:57.678 12:05:03 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:57.678 12:05:03 -- host/failover.sh@95 -- # grep -q NVMe0 00:20:57.937 12:05:03 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:58.196 12:05:03 -- host/failover.sh@99 -- # grep -q NVMe0 00:20:58.196 12:05:03 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:58.454 12:05:03 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.021 12:05:04 -- host/failover.sh@101 -- # sleep 3 00:21:02.306 12:05:07 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:02.306 12:05:07 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:02.306 12:05:07 -- host/failover.sh@108 -- # killprocess 82773 00:21:02.306 12:05:07 -- common/autotest_common.sh@936 -- # '[' -z 82773 ']' 00:21:02.306 12:05:07 -- common/autotest_common.sh@940 -- # kill -0 82773 00:21:02.306 12:05:07 -- common/autotest_common.sh@941 -- # uname 00:21:02.306 12:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.306 12:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82773 00:21:02.307 killing process with pid 82773 00:21:02.307 12:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:02.307 12:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:02.307 12:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82773' 00:21:02.307 12:05:07 -- common/autotest_common.sh@955 -- # kill 82773 00:21:02.307 12:05:07 -- common/autotest_common.sh@960 -- # wait 82773 00:21:02.307 12:05:07 -- host/failover.sh@110 -- # sync 00:21:02.307 12:05:07 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.565 12:05:08 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:02.565 12:05:08 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:02.565 12:05:08 -- host/failover.sh@116 -- # nvmftestfini 00:21:02.565 12:05:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:02.565 12:05:08 -- nvmf/common.sh@116 -- # sync 00:21:02.565 12:05:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:02.565 12:05:08 -- nvmf/common.sh@119 -- # set +e 00:21:02.565 12:05:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:02.565 12:05:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:02.565 rmmod 
nvme_tcp 00:21:02.565 rmmod nvme_fabrics 00:21:02.824 rmmod nvme_keyring 00:21:02.824 12:05:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:02.824 12:05:08 -- nvmf/common.sh@123 -- # set -e 00:21:02.824 12:05:08 -- nvmf/common.sh@124 -- # return 0 00:21:02.824 12:05:08 -- nvmf/common.sh@477 -- # '[' -n 82513 ']' 00:21:02.824 12:05:08 -- nvmf/common.sh@478 -- # killprocess 82513 00:21:02.824 12:05:08 -- common/autotest_common.sh@936 -- # '[' -z 82513 ']' 00:21:02.824 12:05:08 -- common/autotest_common.sh@940 -- # kill -0 82513 00:21:02.824 12:05:08 -- common/autotest_common.sh@941 -- # uname 00:21:02.824 12:05:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.824 12:05:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82513 00:21:02.824 12:05:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:02.824 12:05:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:02.824 12:05:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82513' 00:21:02.824 killing process with pid 82513 00:21:02.824 12:05:08 -- common/autotest_common.sh@955 -- # kill 82513 00:21:02.824 12:05:08 -- common/autotest_common.sh@960 -- # wait 82513 00:21:03.142 12:05:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.143 12:05:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:03.143 12:05:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:03.143 12:05:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.143 12:05:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:03.143 12:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.143 12:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.143 12:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.143 12:05:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:03.143 00:21:03.143 real 0m33.500s 00:21:03.143 user 2m9.125s 00:21:03.143 sys 0m6.061s 00:21:03.143 12:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:03.143 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 ************************************ 00:21:03.143 END TEST nvmf_failover 00:21:03.143 ************************************ 00:21:03.143 12:05:08 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:03.143 12:05:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.143 12:05:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.143 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:21:03.143 ************************************ 00:21:03.143 START TEST nvmf_discovery 00:21:03.143 ************************************ 00:21:03.143 12:05:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:03.441 * Looking for test storage... 
00:21:03.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:03.441 12:05:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:03.441 12:05:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:03.441 12:05:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:03.441 12:05:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:03.441 12:05:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:03.441 12:05:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:03.441 12:05:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:03.441 12:05:08 -- scripts/common.sh@335 -- # IFS=.-: 00:21:03.441 12:05:08 -- scripts/common.sh@335 -- # read -ra ver1 00:21:03.441 12:05:08 -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.441 12:05:08 -- scripts/common.sh@336 -- # read -ra ver2 00:21:03.441 12:05:08 -- scripts/common.sh@337 -- # local 'op=<' 00:21:03.441 12:05:08 -- scripts/common.sh@339 -- # ver1_l=2 00:21:03.441 12:05:08 -- scripts/common.sh@340 -- # ver2_l=1 00:21:03.441 12:05:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:03.441 12:05:08 -- scripts/common.sh@343 -- # case "$op" in 00:21:03.441 12:05:08 -- scripts/common.sh@344 -- # : 1 00:21:03.441 12:05:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:03.441 12:05:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:03.441 12:05:08 -- scripts/common.sh@364 -- # decimal 1 00:21:03.441 12:05:08 -- scripts/common.sh@352 -- # local d=1 00:21:03.441 12:05:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.441 12:05:08 -- scripts/common.sh@354 -- # echo 1 00:21:03.441 12:05:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:03.441 12:05:08 -- scripts/common.sh@365 -- # decimal 2 00:21:03.441 12:05:08 -- scripts/common.sh@352 -- # local d=2 00:21:03.441 12:05:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.441 12:05:08 -- scripts/common.sh@354 -- # echo 2 00:21:03.441 12:05:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:03.441 12:05:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:03.441 12:05:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:03.441 12:05:08 -- scripts/common.sh@367 -- # return 0 00:21:03.441 12:05:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.441 12:05:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:03.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.441 --rc genhtml_branch_coverage=1 00:21:03.441 --rc genhtml_function_coverage=1 00:21:03.441 --rc genhtml_legend=1 00:21:03.441 --rc geninfo_all_blocks=1 00:21:03.441 --rc geninfo_unexecuted_blocks=1 00:21:03.441 00:21:03.441 ' 00:21:03.441 12:05:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:03.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.441 --rc genhtml_branch_coverage=1 00:21:03.441 --rc genhtml_function_coverage=1 00:21:03.441 --rc genhtml_legend=1 00:21:03.441 --rc geninfo_all_blocks=1 00:21:03.441 --rc geninfo_unexecuted_blocks=1 00:21:03.441 00:21:03.441 ' 00:21:03.441 12:05:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:03.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.441 --rc genhtml_branch_coverage=1 00:21:03.441 --rc genhtml_function_coverage=1 00:21:03.441 --rc genhtml_legend=1 00:21:03.441 --rc geninfo_all_blocks=1 00:21:03.441 --rc geninfo_unexecuted_blocks=1 00:21:03.441 00:21:03.441 ' 00:21:03.441 
12:05:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:03.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.442 --rc genhtml_branch_coverage=1 00:21:03.442 --rc genhtml_function_coverage=1 00:21:03.442 --rc genhtml_legend=1 00:21:03.442 --rc geninfo_all_blocks=1 00:21:03.442 --rc geninfo_unexecuted_blocks=1 00:21:03.442 00:21:03.442 ' 00:21:03.442 12:05:08 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:03.442 12:05:08 -- nvmf/common.sh@7 -- # uname -s 00:21:03.442 12:05:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.442 12:05:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.442 12:05:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.442 12:05:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.442 12:05:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.442 12:05:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.442 12:05:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.442 12:05:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.442 12:05:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.442 12:05:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:21:03.442 12:05:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:21:03.442 12:05:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.442 12:05:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.442 12:05:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:03.442 12:05:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:03.442 12:05:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.442 12:05:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.442 12:05:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.442 12:05:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 12:05:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 12:05:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 12:05:08 -- paths/export.sh@5 -- # export PATH 00:21:03.442 12:05:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.442 12:05:08 -- nvmf/common.sh@46 -- # : 0 00:21:03.442 12:05:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:03.442 12:05:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:03.442 12:05:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:03.442 12:05:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.442 12:05:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.442 12:05:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:03.442 12:05:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:03.442 12:05:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:03.442 12:05:08 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:03.442 12:05:08 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:03.442 12:05:08 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:03.442 12:05:08 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:03.442 12:05:08 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:03.442 12:05:08 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:03.442 12:05:08 -- host/discovery.sh@25 -- # nvmftestinit 00:21:03.442 12:05:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:03.442 12:05:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.442 12:05:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:03.442 12:05:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:03.442 12:05:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:03.442 12:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.442 12:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.442 12:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.442 12:05:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:03.442 12:05:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:03.442 12:05:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.442 12:05:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.442 12:05:08 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:03.442 12:05:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:03.442 12:05:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:03.442 12:05:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:03.442 12:05:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:03.442 12:05:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.442 12:05:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:03.442 12:05:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:03.442 12:05:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:03.442 12:05:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:03.442 12:05:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:03.442 12:05:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:03.442 Cannot find device "nvmf_tgt_br" 00:21:03.442 12:05:08 -- nvmf/common.sh@154 -- # true 00:21:03.442 12:05:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:03.442 Cannot find device "nvmf_tgt_br2" 00:21:03.442 12:05:08 -- nvmf/common.sh@155 -- # true 00:21:03.442 12:05:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:03.442 12:05:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:03.442 Cannot find device "nvmf_tgt_br" 00:21:03.442 12:05:08 -- nvmf/common.sh@157 -- # true 00:21:03.442 12:05:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:03.442 Cannot find device "nvmf_tgt_br2" 00:21:03.442 12:05:08 -- nvmf/common.sh@158 -- # true 00:21:03.442 12:05:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:03.442 12:05:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:03.701 12:05:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:03.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:03.701 12:05:08 -- nvmf/common.sh@161 -- # true 00:21:03.701 12:05:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:03.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:03.701 12:05:08 -- nvmf/common.sh@162 -- # true 00:21:03.701 12:05:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:03.701 12:05:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:03.701 12:05:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:03.701 12:05:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:03.701 12:05:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:03.701 12:05:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:03.701 12:05:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:03.701 12:05:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:03.701 12:05:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:03.701 12:05:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:03.701 12:05:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:03.701 12:05:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:03.701 12:05:09 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:03.701 12:05:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:03.701 12:05:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:03.701 12:05:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:03.701 12:05:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:03.701 12:05:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:03.701 12:05:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:03.701 12:05:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:03.701 12:05:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:03.701 12:05:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:03.701 12:05:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:03.701 12:05:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:03.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:21:03.702 00:21:03.702 --- 10.0.0.2 ping statistics --- 00:21:03.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.702 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:03.702 12:05:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:03.702 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:03.702 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:03.702 00:21:03.702 --- 10.0.0.3 ping statistics --- 00:21:03.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.702 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:03.702 12:05:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:03.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:03.702 00:21:03.702 --- 10.0.0.1 ping statistics --- 00:21:03.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.702 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:03.702 12:05:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.702 12:05:09 -- nvmf/common.sh@421 -- # return 0 00:21:03.702 12:05:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:03.702 12:05:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.702 12:05:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:03.702 12:05:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:03.702 12:05:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.702 12:05:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:03.702 12:05:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:03.702 12:05:09 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:03.702 12:05:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.702 12:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.702 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:21:03.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:03.702 12:05:09 -- nvmf/common.sh@469 -- # nvmfpid=83130 00:21:03.702 12:05:09 -- nvmf/common.sh@470 -- # waitforlisten 83130 00:21:03.702 12:05:09 -- common/autotest_common.sh@829 -- # '[' -z 83130 ']' 00:21:03.702 12:05:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.702 12:05:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.702 12:05:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.702 12:05:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.702 12:05:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.702 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:21:03.960 [2024-11-29 12:05:09.229840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:03.960 [2024-11-29 12:05:09.229934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.960 [2024-11-29 12:05:09.371587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.960 [2024-11-29 12:05:09.465863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.960 [2024-11-29 12:05:09.466068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.960 [2024-11-29 12:05:09.466086] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.960 [2024-11-29 12:05:09.466098] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.960 [2024-11-29 12:05:09.466130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.898 12:05:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.898 12:05:10 -- common/autotest_common.sh@862 -- # return 0 00:21:04.898 12:05:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.898 12:05:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 12:05:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.898 12:05:10 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.898 12:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 [2024-11-29 12:05:10.281838] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.898 12:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.898 12:05:10 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:04.898 12:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 [2024-11-29 12:05:10.289970] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:04.898 12:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.898 12:05:10 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:04.898 12:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 null0 00:21:04.898 12:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.898 12:05:10 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:04.898 12:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 null1 00:21:04.898 12:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.898 12:05:10 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:04.898 12:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 12:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.898 12:05:10 -- host/discovery.sh@45 -- # hostpid=83162 00:21:04.898 12:05:10 -- host/discovery.sh@46 -- # waitforlisten 83162 /tmp/host.sock 00:21:04.898 12:05:10 -- common/autotest_common.sh@829 -- # '[' -z 83162 ']' 00:21:04.898 12:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:04.898 12:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.898 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:04.898 12:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:04.898 12:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.898 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:04.898 12:05:10 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:04.898 [2024-11-29 12:05:10.388382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:04.898 [2024-11-29 12:05:10.388476] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83162 ] 00:21:05.171 [2024-11-29 12:05:10.523624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.171 [2024-11-29 12:05:10.619587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:05.171 [2024-11-29 12:05:10.619784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.107 12:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.107 12:05:11 -- common/autotest_common.sh@862 -- # return 0 00:21:06.107 12:05:11 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.107 12:05:11 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@72 -- # notify_id=0 00:21:06.107 12:05:11 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # sort 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # xargs 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:06.107 12:05:11 -- host/discovery.sh@79 -- # get_bdev_list 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # xargs 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # sort 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:06.107 12:05:11 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:06.107 12:05:11 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # sort 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # xargs 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:06.107 12:05:11 -- host/discovery.sh@83 -- # get_bdev_list 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # sort 00:21:06.107 12:05:11 -- host/discovery.sh@55 -- # xargs 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:06.107 12:05:11 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.107 12:05:11 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:06.107 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.107 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # sort 00:21:06.107 12:05:11 -- host/discovery.sh@59 -- # xargs 00:21:06.107 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.366 12:05:11 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:06.366 12:05:11 -- host/discovery.sh@87 -- # get_bdev_list 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:06.366 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # sort 00:21:06.366 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # xargs 00:21:06.366 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.366 12:05:11 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:06.366 12:05:11 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:06.366 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.366 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.366 [2024-11-29 12:05:11.710375] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.366 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.366 12:05:11 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:06.366 12:05:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.366 12:05:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:06.366 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.366 12:05:11 -- host/discovery.sh@59 -- # sort 00:21:06.366 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.366 12:05:11 -- host/discovery.sh@59 -- # xargs 
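The repeated rpc_cmd / jq / sort / xargs fragments above come from two small helpers in test/nvmf/host/discovery.sh; condensed, they amount to roughly the following (a sketch of the pattern, not the verbatim helper bodies):

    get_subsystem_names() {   # controllers the host-side bdev_nvme has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdevs created from discovered namespaces
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both are still empty strings at this point in the trace, since the discovery service has not yet attached a controller for nqn.2016-06.io.spdk:cnode0.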
00:21:06.366 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.366 12:05:11 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:06.366 12:05:11 -- host/discovery.sh@93 -- # get_bdev_list 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.366 12:05:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:06.367 12:05:11 -- host/discovery.sh@55 -- # sort 00:21:06.367 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.367 12:05:11 -- host/discovery.sh@55 -- # xargs 00:21:06.367 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.367 12:05:11 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:06.367 12:05:11 -- host/discovery.sh@94 -- # get_notification_count 00:21:06.367 12:05:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:06.367 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.367 12:05:11 -- host/discovery.sh@74 -- # jq '. | length' 00:21:06.367 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.625 12:05:11 -- host/discovery.sh@74 -- # notification_count=0 00:21:06.625 12:05:11 -- host/discovery.sh@75 -- # notify_id=0 00:21:06.625 12:05:11 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:06.625 12:05:11 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:06.625 12:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.625 12:05:11 -- common/autotest_common.sh@10 -- # set +x 00:21:06.625 12:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.625 12:05:11 -- host/discovery.sh@100 -- # sleep 1 00:21:06.884 [2024-11-29 12:05:12.364996] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:06.884 [2024-11-29 12:05:12.365050] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:06.884 [2024-11-29 12:05:12.365069] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:06.884 [2024-11-29 12:05:12.371065] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:07.143 [2024-11-29 12:05:12.427369] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:07.143 [2024-11-29 12:05:12.427415] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:07.403 12:05:12 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:07.403 12:05:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:07.403 12:05:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.403 12:05:12 -- common/autotest_common.sh@10 -- # set +x 00:21:07.403 12:05:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:07.403 12:05:12 -- host/discovery.sh@59 -- # sort 00:21:07.403 12:05:12 -- host/discovery.sh@59 -- # xargs 00:21:07.403 12:05:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.662 12:05:12 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.662 12:05:12 -- host/discovery.sh@102 -- # get_bdev_list 00:21:07.662 12:05:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:21:07.662 12:05:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.662 12:05:12 -- common/autotest_common.sh@10 -- # set +x 00:21:07.662 12:05:12 -- host/discovery.sh@55 -- # sort 00:21:07.662 12:05:12 -- host/discovery.sh@55 -- # xargs 00:21:07.662 12:05:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:07.662 12:05:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:07.662 12:05:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:07.662 12:05:13 -- host/discovery.sh@63 -- # sort -n 00:21:07.662 12:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.662 12:05:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:07.662 12:05:13 -- host/discovery.sh@63 -- # xargs 00:21:07.662 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:21:07.662 12:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@104 -- # get_notification_count 00:21:07.662 12:05:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:07.662 12:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.662 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:21:07.662 12:05:13 -- host/discovery.sh@74 -- # jq '. | length' 00:21:07.662 12:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@74 -- # notification_count=1 00:21:07.662 12:05:13 -- host/discovery.sh@75 -- # notify_id=1 00:21:07.662 12:05:13 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:07.662 12:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.662 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:21:07.662 12:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.662 12:05:13 -- host/discovery.sh@109 -- # sleep 1 00:21:09.041 12:05:14 -- host/discovery.sh@110 -- # get_bdev_list 00:21:09.041 12:05:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.041 12:05:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.041 12:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.041 12:05:14 -- host/discovery.sh@55 -- # sort 00:21:09.041 12:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:09.041 12:05:14 -- host/discovery.sh@55 -- # xargs 00:21:09.041 12:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.041 12:05:14 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:09.041 12:05:14 -- host/discovery.sh@111 -- # get_notification_count 00:21:09.041 12:05:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:09.041 12:05:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:09.041 12:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.041 12:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:09.041 12:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.041 12:05:14 -- host/discovery.sh@74 -- # notification_count=1 00:21:09.041 12:05:14 -- host/discovery.sh@75 -- # notify_id=2 00:21:09.041 12:05:14 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:09.041 12:05:14 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:09.041 12:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.041 12:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:09.042 [2024-11-29 12:05:14.261232] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.042 [2024-11-29 12:05:14.261916] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:09.042 [2024-11-29 12:05:14.261957] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:09.042 12:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.042 12:05:14 -- host/discovery.sh@117 -- # sleep 1 00:21:09.042 [2024-11-29 12:05:14.267906] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:09.042 [2024-11-29 12:05:14.328198] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:09.042 [2024-11-29 12:05:14.328247] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:09.042 [2024-11-29 12:05:14.328255] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:09.978 12:05:15 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:09.978 12:05:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:09.978 12:05:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:09.978 12:05:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.978 12:05:15 -- host/discovery.sh@59 -- # sort 00:21:09.978 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:09.978 12:05:15 -- host/discovery.sh@59 -- # xargs 00:21:09.978 12:05:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@119 -- # get_bdev_list 00:21:09.978 12:05:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.978 12:05:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.978 12:05:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:09.978 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:09.978 12:05:15 -- host/discovery.sh@55 -- # sort 00:21:09.978 12:05:15 -- host/discovery.sh@55 -- # xargs 00:21:09.978 12:05:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:09.978 12:05:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:09.978 12:05:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:09.978 12:05:15 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.978 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:09.978 12:05:15 -- host/discovery.sh@63 -- # xargs 00:21:09.978 12:05:15 -- host/discovery.sh@63 -- # sort -n 00:21:09.978 12:05:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@121 -- # get_notification_count 00:21:09.978 12:05:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:09.978 12:05:15 -- host/discovery.sh@74 -- # jq '. | length' 00:21:09.978 12:05:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.978 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:09.978 12:05:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@74 -- # notification_count=0 00:21:09.978 12:05:15 -- host/discovery.sh@75 -- # notify_id=2 00:21:09.978 12:05:15 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:09.978 12:05:15 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:09.978 12:05:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.978 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:10.238 [2024-11-29 12:05:15.492288] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:10.238 [2024-11-29 12:05:15.492456] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:10.238 [2024-11-29 12:05:15.495146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.238 [2024-11-29 12:05:15.495184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.238 [2024-11-29 12:05:15.495198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.238 [2024-11-29 12:05:15.495208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.238 [2024-11-29 12:05:15.495218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.238 [2024-11-29 12:05:15.495227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.238 [2024-11-29 12:05:15.495237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.238 [2024-11-29 12:05:15.495246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.238 [2024-11-29 12:05:15.495256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a88150 is same with the state(5) to be set 00:21:10.238 12:05:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.238 12:05:15 -- host/discovery.sh@127 -- # sleep 1 00:21:10.238 [2024-11-29 12:05:15.498327] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:10.238 [2024-11-29 12:05:15.498359] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:10.238 [2024-11-29 12:05:15.498421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a88150 (9): Bad file descriptor 00:21:11.177 12:05:16 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:11.177 12:05:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:11.177 12:05:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:11.177 12:05:16 -- host/discovery.sh@59 -- # sort 00:21:11.177 12:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.177 12:05:16 -- host/discovery.sh@59 -- # xargs 00:21:11.177 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:11.177 12:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@129 -- # get_bdev_list 00:21:11.177 12:05:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.177 12:05:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:11.177 12:05:16 -- host/discovery.sh@55 -- # sort 00:21:11.177 12:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.177 12:05:16 -- host/discovery.sh@55 -- # xargs 00:21:11.177 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:11.177 12:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:11.177 12:05:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:11.177 12:05:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:11.177 12:05:16 -- host/discovery.sh@63 -- # sort -n 00:21:11.177 12:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.177 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:11.177 12:05:16 -- host/discovery.sh@63 -- # xargs 00:21:11.177 12:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:11.177 12:05:16 -- host/discovery.sh@131 -- # get_notification_count 00:21:11.177 12:05:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:11.177 12:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.177 12:05:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:11.177 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:11.177 12:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.436 12:05:16 -- host/discovery.sh@74 -- # notification_count=0 00:21:11.436 12:05:16 -- host/discovery.sh@75 -- # notify_id=2 00:21:11.436 12:05:16 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:11.436 12:05:16 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:11.436 12:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.436 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:21:11.436 12:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.436 12:05:16 -- host/discovery.sh@135 -- # sleep 1 00:21:12.374 12:05:17 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:12.374 12:05:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:12.374 12:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.374 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:21:12.374 12:05:17 -- host/discovery.sh@59 -- # sort 00:21:12.374 12:05:17 -- host/discovery.sh@59 -- # xargs 00:21:12.374 12:05:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:12.374 12:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.374 12:05:17 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:12.374 12:05:17 -- host/discovery.sh@137 -- # get_bdev_list 00:21:12.374 12:05:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:12.374 12:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.374 12:05:17 -- host/discovery.sh@55 -- # sort 00:21:12.374 12:05:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:12.374 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:21:12.374 12:05:17 -- host/discovery.sh@55 -- # xargs 00:21:12.374 12:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.374 12:05:17 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:12.374 12:05:17 -- host/discovery.sh@138 -- # get_notification_count 00:21:12.374 12:05:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:12.374 12:05:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:12.374 12:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.374 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:21:12.374 12:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.633 12:05:17 -- host/discovery.sh@74 -- # notification_count=2 00:21:12.633 12:05:17 -- host/discovery.sh@75 -- # notify_id=4 00:21:12.633 12:05:17 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:12.633 12:05:17 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:12.633 12:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.633 12:05:17 -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 [2024-11-29 12:05:18.927779] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:13.570 [2024-11-29 12:05:18.927824] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:13.570 [2024-11-29 12:05:18.927844] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:13.570 [2024-11-29 12:05:18.933813] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:13.570 [2024-11-29 12:05:18.993494] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:13.570 [2024-11-29 12:05:18.993572] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:13.570 12:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.570 12:05:18 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.570 12:05:18 -- common/autotest_common.sh@650 -- # local es=0 00:21:13.570 12:05:18 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.570 12:05:18 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:13.570 12:05:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.570 12:05:18 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:13.570 12:05:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.570 12:05:18 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.570 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.570 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 request: 00:21:13.570 { 00:21:13.570 "name": "nvme", 00:21:13.570 "trtype": "tcp", 00:21:13.570 "traddr": "10.0.0.2", 00:21:13.570 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:13.570 "adrfam": "ipv4", 00:21:13.570 "trsvcid": "8009", 00:21:13.570 "wait_for_attach": true, 00:21:13.570 "method": "bdev_nvme_start_discovery", 00:21:13.570 "req_id": 1 00:21:13.570 } 00:21:13.570 Got JSON-RPC error response 00:21:13.570 response: 00:21:13.570 { 00:21:13.570 "code": -17, 00:21:13.570 "message": "File exists" 00:21:13.570 } 00:21:13.570 12:05:19 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:13.570 12:05:19 -- common/autotest_common.sh@653 -- # es=1 00:21:13.570 12:05:19 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:13.570 12:05:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:13.570 12:05:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:13.570 12:05:19 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:13.570 12:05:19 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:13.570 12:05:19 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:13.570 12:05:19 -- host/discovery.sh@67 -- # xargs 00:21:13.570 12:05:19 -- host/discovery.sh@67 -- # sort 00:21:13.570 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.570 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 12:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.570 12:05:19 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:13.570 12:05:19 -- host/discovery.sh@147 -- # get_bdev_list 00:21:13.570 12:05:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.570 12:05:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.570 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.570 12:05:19 -- host/discovery.sh@55 -- # xargs 00:21:13.570 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 12:05:19 -- host/discovery.sh@55 -- # sort 00:21:13.830 12:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.830 12:05:19 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.830 12:05:19 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.830 12:05:19 -- common/autotest_common.sh@650 -- # local es=0 00:21:13.830 12:05:19 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.830 12:05:19 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.830 12:05:19 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:13.830 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.830 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.830 request: 00:21:13.830 { 00:21:13.830 "name": "nvme_second", 00:21:13.830 "trtype": "tcp", 00:21:13.830 "traddr": "10.0.0.2", 00:21:13.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:13.830 "adrfam": "ipv4", 00:21:13.830 "trsvcid": "8009", 00:21:13.830 "wait_for_attach": true, 00:21:13.830 "method": "bdev_nvme_start_discovery", 00:21:13.830 "req_id": 1 00:21:13.830 } 00:21:13.830 Got JSON-RPC error response 00:21:13.830 response: 00:21:13.830 { 00:21:13.830 "code": -17, 00:21:13.830 "message": "File exists" 00:21:13.830 } 00:21:13.830 12:05:19 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:13.830 12:05:19 -- common/autotest_common.sh@653 -- # es=1 00:21:13.830 12:05:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:13.830 12:05:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:13.830 12:05:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:13.830 
12:05:19 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:13.830 12:05:19 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:13.830 12:05:19 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:13.830 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.830 12:05:19 -- host/discovery.sh@67 -- # sort 00:21:13.830 12:05:19 -- host/discovery.sh@67 -- # xargs 00:21:13.830 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.830 12:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.830 12:05:19 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:13.830 12:05:19 -- host/discovery.sh@153 -- # get_bdev_list 00:21:13.830 12:05:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:13.830 12:05:19 -- host/discovery.sh@55 -- # sort 00:21:13.830 12:05:19 -- host/discovery.sh@55 -- # xargs 00:21:13.830 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.830 12:05:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:13.830 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.830 12:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.830 12:05:19 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:13.830 12:05:19 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.830 12:05:19 -- common/autotest_common.sh@650 -- # local es=0 00:21:13.830 12:05:19 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.830 12:05:19 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:13.830 12:05:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.830 12:05:19 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:13.830 12:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.830 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:21:15.207 [2024-11-29 12:05:20.279362] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.207 [2024-11-29 12:05:20.279526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.207 [2024-11-29 12:05:20.279579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.207 [2024-11-29 12:05:20.279597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac9350 with addr=10.0.0.2, port=8010 00:21:15.207 [2024-11-29 12:05:20.279620] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:15.207 [2024-11-29 12:05:20.279631] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:15.207 [2024-11-29 12:05:20.279641] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:15.795 [2024-11-29 12:05:21.279349] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.795 [2024-11-29 12:05:21.279459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
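The "File exists" checks above and the "Connection timed out" case just below are negative tests: they rely on the NOT wrapper from autotest_common.sh, visible in the trace as the es bookkeeping. A simplified sketch of that pattern (the real helper also validates the command and treats exit codes above 128 specially, per the (( es > 128 )) line above):

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT succeeds only if the wrapped command failed
        (( es != 0 ))
    }
    # e.g. restarting discovery under a name that is already in use must return "File exists" (-17):
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w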
00:21:15.795 [2024-11-29 12:05:21.279529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.795 [2024-11-29 12:05:21.279550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac9350 with addr=10.0.0.2, port=8010 00:21:15.795 [2024-11-29 12:05:21.279573] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:15.795 [2024-11-29 12:05:21.279584] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:15.795 [2024-11-29 12:05:21.279595] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:17.205 [2024-11-29 12:05:22.279195] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:17.205 request: 00:21:17.205 { 00:21:17.205 "name": "nvme_second", 00:21:17.205 "trtype": "tcp", 00:21:17.205 "traddr": "10.0.0.2", 00:21:17.205 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:17.205 "adrfam": "ipv4", 00:21:17.205 "trsvcid": "8010", 00:21:17.205 "attach_timeout_ms": 3000, 00:21:17.205 "method": "bdev_nvme_start_discovery", 00:21:17.205 "req_id": 1 00:21:17.205 } 00:21:17.205 Got JSON-RPC error response 00:21:17.205 response: 00:21:17.205 { 00:21:17.205 "code": -110, 00:21:17.205 "message": "Connection timed out" 00:21:17.205 } 00:21:17.205 12:05:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:17.205 12:05:22 -- common/autotest_common.sh@653 -- # es=1 00:21:17.205 12:05:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.205 12:05:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.205 12:05:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.205 12:05:22 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:17.205 12:05:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:17.205 12:05:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:17.205 12:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.205 12:05:22 -- host/discovery.sh@67 -- # sort 00:21:17.205 12:05:22 -- common/autotest_common.sh@10 -- # set +x 00:21:17.205 12:05:22 -- host/discovery.sh@67 -- # xargs 00:21:17.205 12:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.205 12:05:22 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:17.205 12:05:22 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:17.205 12:05:22 -- host/discovery.sh@162 -- # kill 83162 00:21:17.205 12:05:22 -- host/discovery.sh@163 -- # nvmftestfini 00:21:17.205 12:05:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:17.205 12:05:22 -- nvmf/common.sh@116 -- # sync 00:21:17.205 12:05:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:17.205 12:05:22 -- nvmf/common.sh@119 -- # set +e 00:21:17.205 12:05:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:17.205 12:05:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:17.205 rmmod nvme_tcp 00:21:17.205 rmmod nvme_fabrics 00:21:17.205 rmmod nvme_keyring 00:21:17.205 12:05:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:17.205 12:05:22 -- nvmf/common.sh@123 -- # set -e 00:21:17.205 12:05:22 -- nvmf/common.sh@124 -- # return 0 00:21:17.205 12:05:22 -- nvmf/common.sh@477 -- # '[' -n 83130 ']' 00:21:17.205 12:05:22 -- nvmf/common.sh@478 -- # killprocess 83130 00:21:17.205 12:05:22 -- common/autotest_common.sh@936 -- # '[' -z 83130 ']' 00:21:17.205 12:05:22 -- common/autotest_common.sh@940 -- # kill -0 83130 00:21:17.205 12:05:22 -- 
common/autotest_common.sh@941 -- # uname 00:21:17.205 12:05:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.205 12:05:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83130 00:21:17.205 killing process with pid 83130 00:21:17.205 12:05:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:17.205 12:05:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:17.205 12:05:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83130' 00:21:17.205 12:05:22 -- common/autotest_common.sh@955 -- # kill 83130 00:21:17.205 12:05:22 -- common/autotest_common.sh@960 -- # wait 83130 00:21:17.205 12:05:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:17.205 12:05:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:17.205 12:05:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:17.205 12:05:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.205 12:05:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:17.205 12:05:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.205 12:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.205 12:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.464 12:05:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:17.464 ************************************ 00:21:17.464 END TEST nvmf_discovery 00:21:17.464 ************************************ 00:21:17.464 00:21:17.464 real 0m14.144s 00:21:17.464 user 0m26.921s 00:21:17.464 sys 0m2.393s 00:21:17.464 12:05:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:17.464 12:05:22 -- common/autotest_common.sh@10 -- # set +x 00:21:17.464 12:05:22 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:17.464 12:05:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:17.464 12:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:17.464 12:05:22 -- common/autotest_common.sh@10 -- # set +x 00:21:17.464 ************************************ 00:21:17.464 START TEST nvmf_discovery_remove_ifc 00:21:17.464 ************************************ 00:21:17.464 12:05:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:17.464 * Looking for test storage... 
00:21:17.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:17.464 12:05:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:17.464 12:05:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:17.464 12:05:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:17.464 12:05:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:17.464 12:05:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:17.464 12:05:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:17.464 12:05:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:17.464 12:05:22 -- scripts/common.sh@335 -- # IFS=.-: 00:21:17.464 12:05:22 -- scripts/common.sh@335 -- # read -ra ver1 00:21:17.464 12:05:22 -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.464 12:05:22 -- scripts/common.sh@336 -- # read -ra ver2 00:21:17.464 12:05:22 -- scripts/common.sh@337 -- # local 'op=<' 00:21:17.464 12:05:22 -- scripts/common.sh@339 -- # ver1_l=2 00:21:17.464 12:05:22 -- scripts/common.sh@340 -- # ver2_l=1 00:21:17.464 12:05:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:17.464 12:05:22 -- scripts/common.sh@343 -- # case "$op" in 00:21:17.464 12:05:22 -- scripts/common.sh@344 -- # : 1 00:21:17.464 12:05:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:17.464 12:05:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.464 12:05:22 -- scripts/common.sh@364 -- # decimal 1 00:21:17.464 12:05:22 -- scripts/common.sh@352 -- # local d=1 00:21:17.464 12:05:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.464 12:05:22 -- scripts/common.sh@354 -- # echo 1 00:21:17.464 12:05:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:17.464 12:05:22 -- scripts/common.sh@365 -- # decimal 2 00:21:17.464 12:05:22 -- scripts/common.sh@352 -- # local d=2 00:21:17.464 12:05:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.464 12:05:22 -- scripts/common.sh@354 -- # echo 2 00:21:17.464 12:05:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:17.464 12:05:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:17.464 12:05:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:17.464 12:05:22 -- scripts/common.sh@367 -- # return 0 00:21:17.464 12:05:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.464 12:05:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.464 --rc genhtml_branch_coverage=1 00:21:17.464 --rc genhtml_function_coverage=1 00:21:17.464 --rc genhtml_legend=1 00:21:17.464 --rc geninfo_all_blocks=1 00:21:17.464 --rc geninfo_unexecuted_blocks=1 00:21:17.464 00:21:17.464 ' 00:21:17.464 12:05:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.464 --rc genhtml_branch_coverage=1 00:21:17.464 --rc genhtml_function_coverage=1 00:21:17.464 --rc genhtml_legend=1 00:21:17.464 --rc geninfo_all_blocks=1 00:21:17.464 --rc geninfo_unexecuted_blocks=1 00:21:17.464 00:21:17.464 ' 00:21:17.464 12:05:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.464 --rc genhtml_branch_coverage=1 00:21:17.464 --rc genhtml_function_coverage=1 00:21:17.464 --rc genhtml_legend=1 00:21:17.464 --rc geninfo_all_blocks=1 00:21:17.464 --rc geninfo_unexecuted_blocks=1 00:21:17.464 00:21:17.464 ' 00:21:17.464 
12:05:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:17.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.464 --rc genhtml_branch_coverage=1 00:21:17.464 --rc genhtml_function_coverage=1 00:21:17.465 --rc genhtml_legend=1 00:21:17.465 --rc geninfo_all_blocks=1 00:21:17.465 --rc geninfo_unexecuted_blocks=1 00:21:17.465 00:21:17.465 ' 00:21:17.465 12:05:22 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.465 12:05:22 -- nvmf/common.sh@7 -- # uname -s 00:21:17.723 12:05:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.723 12:05:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.723 12:05:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.723 12:05:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.723 12:05:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.723 12:05:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.723 12:05:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.723 12:05:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.723 12:05:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.723 12:05:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.723 12:05:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:21:17.723 12:05:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:21:17.723 12:05:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.724 12:05:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.724 12:05:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.724 12:05:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.724 12:05:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.724 12:05:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.724 12:05:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.724 12:05:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.724 12:05:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.724 12:05:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.724 12:05:22 -- paths/export.sh@5 -- # export PATH 00:21:17.724 12:05:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.724 12:05:22 -- nvmf/common.sh@46 -- # : 0 00:21:17.724 12:05:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:17.724 12:05:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:17.724 12:05:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:17.724 12:05:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.724 12:05:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.724 12:05:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:17.724 12:05:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:17.724 12:05:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:17.724 12:05:22 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:17.724 12:05:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:17.724 12:05:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.724 12:05:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:17.724 12:05:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:17.724 12:05:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:17.724 12:05:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.724 12:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.724 12:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.724 12:05:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:17.724 12:05:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:17.724 12:05:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:17.724 12:05:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:17.724 12:05:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:17.724 12:05:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:17.724 12:05:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.724 12:05:23 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.724 12:05:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:17.724 12:05:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:17.724 12:05:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.724 12:05:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.724 12:05:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.724 12:05:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.724 12:05:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.724 12:05:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.724 12:05:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.724 12:05:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.724 12:05:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:17.724 12:05:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:17.724 Cannot find device "nvmf_tgt_br" 00:21:17.724 12:05:23 -- nvmf/common.sh@154 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.724 Cannot find device "nvmf_tgt_br2" 00:21:17.724 12:05:23 -- nvmf/common.sh@155 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:17.724 12:05:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:17.724 Cannot find device "nvmf_tgt_br" 00:21:17.724 12:05:23 -- nvmf/common.sh@157 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:17.724 Cannot find device "nvmf_tgt_br2" 00:21:17.724 12:05:23 -- nvmf/common.sh@158 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:17.724 12:05:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:17.724 12:05:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.724 12:05:23 -- nvmf/common.sh@161 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.724 12:05:23 -- nvmf/common.sh@162 -- # true 00:21:17.724 12:05:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.724 12:05:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.724 12:05:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.724 12:05:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.724 12:05:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:17.724 12:05:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.724 12:05:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.724 12:05:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:17.724 12:05:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:17.724 12:05:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:17.724 12:05:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:17.724 12:05:23 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:17.724 12:05:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:17.724 12:05:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.724 12:05:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.983 12:05:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.983 12:05:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:17.983 12:05:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:17.983 12:05:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:17.983 12:05:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.983 12:05:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.983 12:05:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.983 12:05:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.983 12:05:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:17.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:21:17.983 00:21:17.983 --- 10.0.0.2 ping statistics --- 00:21:17.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.983 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:17.983 12:05:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:17.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:17.983 00:21:17.983 --- 10.0.0.3 ping statistics --- 00:21:17.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.983 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:17.983 12:05:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:17.983 00:21:17.983 --- 10.0.0.1 ping statistics --- 00:21:17.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.983 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:17.983 12:05:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.983 12:05:23 -- nvmf/common.sh@421 -- # return 0 00:21:17.983 12:05:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:17.983 12:05:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.983 12:05:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:17.983 12:05:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:17.983 12:05:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.983 12:05:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:17.983 12:05:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:17.983 12:05:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:17.983 12:05:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:17.983 12:05:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.983 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:21:17.983 12:05:23 -- nvmf/common.sh@469 -- # nvmfpid=83672 00:21:17.983 12:05:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.983 12:05:23 -- nvmf/common.sh@470 -- # waitforlisten 83672 00:21:17.983 12:05:23 -- common/autotest_common.sh@829 -- # '[' -z 83672 ']' 00:21:17.983 12:05:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.983 12:05:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.984 12:05:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.984 12:05:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.984 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:21:17.984 [2024-11-29 12:05:23.391586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:17.984 [2024-11-29 12:05:23.391694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.242 [2024-11-29 12:05:23.532243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.242 [2024-11-29 12:05:23.622610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:18.242 [2024-11-29 12:05:23.622955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.242 [2024-11-29 12:05:23.622977] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.242 [2024-11-29 12:05:23.622987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
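Before the target for this test was launched, nvmf_veth_init (traced above) built a small veth/bridge topology between the initiator (root namespace, 10.0.0.1) and a dedicated target namespace. Condensed from the trace, omitting the teardown, the link-up ordering, and the second target interface (nvmf_tgt_if2, 10.0.0.3):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side, 10.0.0.2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the three pings above confirm reachability in both directions

The target application itself then runs inside the namespace via ip netns exec nvmf_tgt_ns_spdk, as the nvmfappstart line shows.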
00:21:18.242 [2024-11-29 12:05:23.623017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.176 12:05:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.176 12:05:24 -- common/autotest_common.sh@862 -- # return 0 00:21:19.176 12:05:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:19.176 12:05:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.176 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:21:19.176 12:05:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.176 12:05:24 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:19.176 12:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.176 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:21:19.176 [2024-11-29 12:05:24.522003] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.176 [2024-11-29 12:05:24.530155] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:19.176 null0 00:21:19.176 [2024-11-29 12:05:24.562078] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.176 12:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.176 12:05:24 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83704 00:21:19.176 12:05:24 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:19.176 12:05:24 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83704 /tmp/host.sock 00:21:19.176 12:05:24 -- common/autotest_common.sh@829 -- # '[' -z 83704 ']' 00:21:19.176 12:05:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:19.176 12:05:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.176 12:05:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:19.176 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:19.176 12:05:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.176 12:05:24 -- common/autotest_common.sh@10 -- # set +x 00:21:19.176 [2024-11-29 12:05:24.637198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:19.176 [2024-11-29 12:05:24.637798] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83704 ] 00:21:19.434 [2024-11-29 12:05:24.773218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.434 [2024-11-29 12:05:24.864535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.434 [2024-11-29 12:05:24.864894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.370 12:05:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.370 12:05:25 -- common/autotest_common.sh@862 -- # return 0 00:21:20.370 12:05:25 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.370 12:05:25 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:20.370 12:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.370 12:05:25 -- common/autotest_common.sh@10 -- # set +x 00:21:20.370 12:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.370 12:05:25 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:20.370 12:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.370 12:05:25 -- common/autotest_common.sh@10 -- # set +x 00:21:20.370 12:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.370 12:05:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:20.370 12:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.370 12:05:25 -- common/autotest_common.sh@10 -- # set +x 00:21:21.307 [2024-11-29 12:05:26.776300] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:21.307 [2024-11-29 12:05:26.776364] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:21.307 [2024-11-29 12:05:26.776384] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:21.307 [2024-11-29 12:05:26.782345] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:21.567 [2024-11-29 12:05:26.839731] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:21.567 [2024-11-29 12:05:26.840143] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:21.567 [2024-11-29 12:05:26.840188] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:21.567 [2024-11-29 12:05:26.840210] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:21.567 [2024-11-29 12:05:26.840243] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:21.567 12:05:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:21.567 [2024-11-29 
12:05:26.844912] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x606af0 was disconnected and freed. delete nvme_qpair. 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:21.567 12:05:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.567 12:05:26 -- common/autotest_common.sh@10 -- # set +x 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:21.567 12:05:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.567 12:05:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.567 12:05:26 -- common/autotest_common.sh@10 -- # set +x 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:21.567 12:05:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:21.567 12:05:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:22.573 12:05:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:22.573 12:05:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:22.573 12:05:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:22.573 12:05:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.573 12:05:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:22.573 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:21:22.573 12:05:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:22.573 12:05:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.573 12:05:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:22.573 12:05:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:23.949 12:05:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:23.949 12:05:29 -- common/autotest_common.sh@10 -- # set +x 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:23.949 12:05:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:23.949 12:05:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:24.885 12:05:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:24.885 12:05:30 -- common/autotest_common.sh@10 -- # set +x 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:24.885 12:05:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:24.885 12:05:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:25.822 12:05:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.822 12:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:25.822 12:05:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:25.822 12:05:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:26.758 12:05:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:26.758 12:05:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:26.758 12:05:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:26.758 12:05:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.759 12:05:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:26.759 12:05:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.759 12:05:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:26.759 12:05:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.759 [2024-11-29 12:05:32.266660] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:26.759 [2024-11-29 12:05:32.266792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.759 [2024-11-29 12:05:32.266809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.759 [2024-11-29 12:05:32.266823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.759 [2024-11-29 12:05:32.266832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.759 [2024-11-29 12:05:32.266842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.759 [2024-11-29 12:05:32.266851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.759 [2024-11-29 12:05:32.266862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.759 [2024-11-29 12:05:32.266871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.759 [2024-11-29 
12:05:32.266882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.759 [2024-11-29 12:05:32.266892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.759 [2024-11-29 12:05:32.266901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cb890 is same with the state(5) to be set 00:21:27.018 [2024-11-29 12:05:32.276666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cb890 (9): Bad file descriptor 00:21:27.018 [2024-11-29 12:05:32.286685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:27.018 12:05:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:27.018 12:05:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:27.963 12:05:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:27.963 12:05:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.963 12:05:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.963 12:05:33 -- common/autotest_common.sh@10 -- # set +x 00:21:27.963 12:05:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:27.963 12:05:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:27.963 12:05:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:27.963 [2024-11-29 12:05:33.329620] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:28.904 [2024-11-29 12:05:34.353663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:30.279 [2024-11-29 12:05:35.377610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:30.279 [2024-11-29 12:05:35.377766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cb890 with addr=10.0.0.2, port=4420 00:21:30.279 [2024-11-29 12:05:35.377807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cb890 is same with the state(5) to be set 00:21:30.279 [2024-11-29 12:05:35.377869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.279 [2024-11-29 12:05:35.377894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.279 [2024-11-29 12:05:35.377914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.279 [2024-11-29 12:05:35.377935] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:30.279 [2024-11-29 12:05:35.378771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cb890 (9): Bad file descriptor 00:21:30.279 [2024-11-29 12:05:35.378837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.279 [2024-11-29 12:05:35.378890] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:30.279 [2024-11-29 12:05:35.378971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.279 [2024-11-29 12:05:35.379001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.279 [2024-11-29 12:05:35.379029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.279 [2024-11-29 12:05:35.379052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.279 [2024-11-29 12:05:35.379075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.279 [2024-11-29 12:05:35.379096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.279 [2024-11-29 12:05:35.379118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.279 [2024-11-29 12:05:35.379139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.279 [2024-11-29 12:05:35.379162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.279 [2024-11-29 12:05:35.379182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.279 [2024-11-29 12:05:35.379203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:21:30.279 [2024-11-29 12:05:35.379264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5caef0 (9): Bad file descriptor 00:21:30.279 [2024-11-29 12:05:35.380267] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:30.279 [2024-11-29 12:05:35.380320] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:30.279 12:05:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.279 12:05:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:30.279 12:05:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.215 12:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.215 12:05:36 -- common/autotest_common.sh@10 -- # set +x 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.215 12:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:31.215 12:05:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.215 12:05:36 -- common/autotest_common.sh@10 -- # set +x 00:21:31.215 12:05:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:31.215 12:05:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:32.153 [2024-11-29 12:05:37.391542] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:32.153 [2024-11-29 12:05:37.391577] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:32.153 [2024-11-29 12:05:37.391597] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:32.153 [2024-11-29 12:05:37.397577] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:32.153 [2024-11-29 12:05:37.453395] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:32.153 [2024-11-29 12:05:37.453690] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:32.153 [2024-11-29 12:05:37.453734] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:32.153 [2024-11-29 12:05:37.453767] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
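The connect() errno 110 and "controller reinitialization failed" messages above are the intended outcome of the earlier step in the trace: the test deletes 10.0.0.2 from nvmf_tgt_if and downs the link while the host controller was attached with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, so once the short loss timeout expires bdev_nvme stops retrying and deletes nvme0n1. The wait_for_bdev/get_bdev_list helpers seen in the trace amount to polling the bdev list over the host RPC socket until it matches the expected value; a simplified sketch (helper names come from discovery_remove_ifc.sh, the exact loop body is an assumption):

  get_bdev_list() {
      # one sorted line of bdev names; empty once the controller is gone
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1    # re-check once per second
      done
  }
  # simulate losing the target interface, then wait for nvme0n1 to disappear
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''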
attach nvme1 done 00:21:32.153 [2024-11-29 12:05:37.453778] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:32.153 [2024-11-29 12:05:37.460229] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x5bae30 was disconnected and freed. delete nvme_qpair. 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:32.153 12:05:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.153 12:05:37 -- common/autotest_common.sh@10 -- # set +x 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:32.153 12:05:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:32.153 12:05:37 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83704 00:21:32.153 12:05:37 -- common/autotest_common.sh@936 -- # '[' -z 83704 ']' 00:21:32.153 12:05:37 -- common/autotest_common.sh@940 -- # kill -0 83704 00:21:32.153 12:05:37 -- common/autotest_common.sh@941 -- # uname 00:21:32.153 12:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.153 12:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83704 00:21:32.153 killing process with pid 83704 00:21:32.153 12:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:32.153 12:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:32.153 12:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83704' 00:21:32.153 12:05:37 -- common/autotest_common.sh@955 -- # kill 83704 00:21:32.153 12:05:37 -- common/autotest_common.sh@960 -- # wait 83704 00:21:32.413 12:05:37 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:32.413 12:05:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:32.413 12:05:37 -- nvmf/common.sh@116 -- # sync 00:21:32.413 12:05:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:32.413 12:05:37 -- nvmf/common.sh@119 -- # set +e 00:21:32.413 12:05:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:32.413 12:05:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:32.413 rmmod nvme_tcp 00:21:32.413 rmmod nvme_fabrics 00:21:32.413 rmmod nvme_keyring 00:21:32.671 12:05:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:32.671 12:05:37 -- nvmf/common.sh@123 -- # set -e 00:21:32.671 12:05:37 -- nvmf/common.sh@124 -- # return 0 00:21:32.671 12:05:37 -- nvmf/common.sh@477 -- # '[' -n 83672 ']' 00:21:32.671 12:05:37 -- nvmf/common.sh@478 -- # killprocess 83672 00:21:32.671 12:05:37 -- common/autotest_common.sh@936 -- # '[' -z 83672 ']' 00:21:32.671 12:05:37 -- common/autotest_common.sh@940 -- # kill -0 83672 00:21:32.671 12:05:37 -- common/autotest_common.sh@941 -- # uname 00:21:32.671 12:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.671 12:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83672 00:21:32.671 killing process with pid 83672 00:21:32.671 12:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:32.671 12:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:21:32.671 12:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83672' 00:21:32.671 12:05:37 -- common/autotest_common.sh@955 -- # kill 83672 00:21:32.671 12:05:37 -- common/autotest_common.sh@960 -- # wait 83672 00:21:32.929 12:05:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:32.929 12:05:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:32.929 12:05:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:32.929 12:05:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.929 12:05:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:32.929 12:05:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.929 12:05:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.929 12:05:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.929 12:05:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:32.929 00:21:32.929 real 0m15.443s 00:21:32.929 user 0m24.767s 00:21:32.929 sys 0m2.601s 00:21:32.929 12:05:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:32.929 ************************************ 00:21:32.929 END TEST nvmf_discovery_remove_ifc 00:21:32.929 ************************************ 00:21:32.929 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:32.929 12:05:38 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:21:32.929 12:05:38 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:32.929 12:05:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:32.929 12:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:32.929 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:32.929 ************************************ 00:21:32.929 START TEST nvmf_digest 00:21:32.929 ************************************ 00:21:32.929 12:05:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:32.929 * Looking for test storage... 00:21:32.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:32.929 12:05:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:32.930 12:05:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:32.930 12:05:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:33.189 12:05:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:33.189 12:05:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:33.189 12:05:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:33.189 12:05:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:33.189 12:05:38 -- scripts/common.sh@335 -- # IFS=.-: 00:21:33.189 12:05:38 -- scripts/common.sh@335 -- # read -ra ver1 00:21:33.189 12:05:38 -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.189 12:05:38 -- scripts/common.sh@336 -- # read -ra ver2 00:21:33.189 12:05:38 -- scripts/common.sh@337 -- # local 'op=<' 00:21:33.189 12:05:38 -- scripts/common.sh@339 -- # ver1_l=2 00:21:33.189 12:05:38 -- scripts/common.sh@340 -- # ver2_l=1 00:21:33.189 12:05:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:33.189 12:05:38 -- scripts/common.sh@343 -- # case "$op" in 00:21:33.189 12:05:38 -- scripts/common.sh@344 -- # : 1 00:21:33.189 12:05:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:33.189 12:05:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.189 12:05:38 -- scripts/common.sh@364 -- # decimal 1 00:21:33.189 12:05:38 -- scripts/common.sh@352 -- # local d=1 00:21:33.189 12:05:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.189 12:05:38 -- scripts/common.sh@354 -- # echo 1 00:21:33.189 12:05:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:33.189 12:05:38 -- scripts/common.sh@365 -- # decimal 2 00:21:33.189 12:05:38 -- scripts/common.sh@352 -- # local d=2 00:21:33.189 12:05:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.189 12:05:38 -- scripts/common.sh@354 -- # echo 2 00:21:33.189 12:05:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:33.189 12:05:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:33.189 12:05:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:33.189 12:05:38 -- scripts/common.sh@367 -- # return 0 00:21:33.189 12:05:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.189 12:05:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.189 --rc genhtml_branch_coverage=1 00:21:33.189 --rc genhtml_function_coverage=1 00:21:33.189 --rc genhtml_legend=1 00:21:33.189 --rc geninfo_all_blocks=1 00:21:33.189 --rc geninfo_unexecuted_blocks=1 00:21:33.189 00:21:33.189 ' 00:21:33.189 12:05:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.189 --rc genhtml_branch_coverage=1 00:21:33.189 --rc genhtml_function_coverage=1 00:21:33.189 --rc genhtml_legend=1 00:21:33.189 --rc geninfo_all_blocks=1 00:21:33.189 --rc geninfo_unexecuted_blocks=1 00:21:33.189 00:21:33.189 ' 00:21:33.189 12:05:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.189 --rc genhtml_branch_coverage=1 00:21:33.189 --rc genhtml_function_coverage=1 00:21:33.189 --rc genhtml_legend=1 00:21:33.189 --rc geninfo_all_blocks=1 00:21:33.189 --rc geninfo_unexecuted_blocks=1 00:21:33.189 00:21:33.189 ' 00:21:33.189 12:05:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:33.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.189 --rc genhtml_branch_coverage=1 00:21:33.189 --rc genhtml_function_coverage=1 00:21:33.189 --rc genhtml_legend=1 00:21:33.189 --rc geninfo_all_blocks=1 00:21:33.189 --rc geninfo_unexecuted_blocks=1 00:21:33.189 00:21:33.189 ' 00:21:33.189 12:05:38 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.189 12:05:38 -- nvmf/common.sh@7 -- # uname -s 00:21:33.189 12:05:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.189 12:05:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.189 12:05:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.189 12:05:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.189 12:05:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.189 12:05:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.189 12:05:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.189 12:05:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.189 12:05:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.189 12:05:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.189 12:05:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:21:33.189 
12:05:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:21:33.189 12:05:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.189 12:05:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.189 12:05:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.189 12:05:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.189 12:05:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.189 12:05:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.189 12:05:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.189 12:05:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.189 12:05:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.190 12:05:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.190 12:05:38 -- paths/export.sh@5 -- # export PATH 00:21:33.190 12:05:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.190 12:05:38 -- nvmf/common.sh@46 -- # : 0 00:21:33.190 12:05:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:33.190 12:05:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:33.190 12:05:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:33.190 12:05:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.190 12:05:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.190 12:05:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
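The host identity used later by this test comes from nvme-cli itself: nvmf/common.sh asks nvme gen-hostnqn for a uuid-based NQN (the nqn.2014-08.org.nvmexpress:uuid:79493c5c-... value a few lines above) and reuses the uuid portion as the host ID. A small sketch of that derivation (the exact shell expansion used by common.sh is an assumption; only the two resulting values appear in the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:79493c5c-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}            # keep just the uuid after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # later handed to 'nvme connect'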
00:21:33.190 12:05:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:33.190 12:05:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:33.190 12:05:38 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:33.190 12:05:38 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:33.190 12:05:38 -- host/digest.sh@16 -- # runtime=2 00:21:33.190 12:05:38 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:21:33.190 12:05:38 -- host/digest.sh@132 -- # nvmftestinit 00:21:33.190 12:05:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:33.190 12:05:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.190 12:05:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:33.190 12:05:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:33.190 12:05:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:33.190 12:05:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.190 12:05:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.190 12:05:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.190 12:05:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:33.190 12:05:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:33.190 12:05:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:33.190 12:05:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:33.190 12:05:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:33.190 12:05:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:33.190 12:05:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.190 12:05:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.190 12:05:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.190 12:05:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:33.190 12:05:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.190 12:05:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.190 12:05:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.190 12:05:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.190 12:05:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.190 12:05:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.190 12:05:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.190 12:05:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.190 12:05:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:33.190 12:05:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:33.190 Cannot find device "nvmf_tgt_br" 00:21:33.190 12:05:38 -- nvmf/common.sh@154 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.190 Cannot find device "nvmf_tgt_br2" 00:21:33.190 12:05:38 -- nvmf/common.sh@155 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:33.190 12:05:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:33.190 Cannot find device "nvmf_tgt_br" 00:21:33.190 12:05:38 -- nvmf/common.sh@157 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:33.190 Cannot find device "nvmf_tgt_br2" 00:21:33.190 12:05:38 -- nvmf/common.sh@158 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:33.190 12:05:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:33.190 
12:05:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.190 12:05:38 -- nvmf/common.sh@161 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.190 12:05:38 -- nvmf/common.sh@162 -- # true 00:21:33.190 12:05:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.190 12:05:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.190 12:05:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:33.190 12:05:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:33.190 12:05:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:33.190 12:05:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:33.190 12:05:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:33.190 12:05:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:33.493 12:05:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:33.493 12:05:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:33.493 12:05:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:33.493 12:05:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:33.493 12:05:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:33.493 12:05:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.493 12:05:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:33.493 12:05:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:33.493 12:05:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:33.493 12:05:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:33.493 12:05:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.493 12:05:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.493 12:05:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:33.493 12:05:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.493 12:05:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.493 12:05:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:33.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:21:33.493 00:21:33.493 --- 10.0.0.2 ping statistics --- 00:21:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.493 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:33.493 12:05:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:33.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:33.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:33.493 00:21:33.493 --- 10.0.0.3 ping statistics --- 00:21:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.493 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:33.493 12:05:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:33.493 00:21:33.493 --- 10.0.0.1 ping statistics --- 00:21:33.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.493 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:33.493 12:05:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.493 12:05:38 -- nvmf/common.sh@421 -- # return 0 00:21:33.493 12:05:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:33.493 12:05:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.493 12:05:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:33.493 12:05:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:33.493 12:05:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.493 12:05:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:33.493 12:05:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:33.493 12:05:38 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:33.493 12:05:38 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:21:33.493 12:05:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:33.493 12:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.493 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 ************************************ 00:21:33.493 START TEST nvmf_digest_clean 00:21:33.493 ************************************ 00:21:33.493 12:05:38 -- common/autotest_common.sh@1114 -- # run_digest 00:21:33.493 12:05:38 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:21:33.493 12:05:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:33.493 12:05:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.493 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:33.493 12:05:38 -- nvmf/common.sh@469 -- # nvmfpid=84123 00:21:33.494 12:05:38 -- nvmf/common.sh@470 -- # waitforlisten 84123 00:21:33.494 12:05:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:33.494 12:05:38 -- common/autotest_common.sh@829 -- # '[' -z 84123 ']' 00:21:33.494 12:05:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.494 12:05:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.494 12:05:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.494 12:05:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.494 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:33.494 [2024-11-29 12:05:38.898898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:33.494 [2024-11-29 12:05:38.899008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.753 [2024-11-29 12:05:39.038616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.753 [2024-11-29 12:05:39.141110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.753 [2024-11-29 12:05:39.141295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.753 [2024-11-29 12:05:39.141310] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.753 [2024-11-29 12:05:39.141320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.753 [2024-11-29 12:05:39.141352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.687 12:05:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.687 12:05:39 -- common/autotest_common.sh@862 -- # return 0 00:21:34.687 12:05:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:34.687 12:05:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.687 12:05:39 -- common/autotest_common.sh@10 -- # set +x 00:21:34.687 12:05:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.687 12:05:39 -- host/digest.sh@120 -- # common_target_config 00:21:34.687 12:05:39 -- host/digest.sh@43 -- # rpc_cmd 00:21:34.687 12:05:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.687 12:05:39 -- common/autotest_common.sh@10 -- # set +x 00:21:34.687 null0 00:21:34.688 [2024-11-29 12:05:40.001384] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.688 [2024-11-29 12:05:40.025586] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.688 12:05:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.688 12:05:40 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:21:34.688 12:05:40 -- host/digest.sh@77 -- # local rw bs qd 00:21:34.688 12:05:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:34.688 12:05:40 -- host/digest.sh@80 -- # rw=randread 00:21:34.688 12:05:40 -- host/digest.sh@80 -- # bs=4096 00:21:34.688 12:05:40 -- host/digest.sh@80 -- # qd=128 00:21:34.688 12:05:40 -- host/digest.sh@82 -- # bperfpid=84155 00:21:34.688 12:05:40 -- host/digest.sh@83 -- # waitforlisten 84155 /var/tmp/bperf.sock 00:21:34.688 12:05:40 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:34.688 12:05:40 -- common/autotest_common.sh@829 -- # '[' -z 84155 ']' 00:21:34.688 12:05:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.688 12:05:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.688 12:05:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:34.688 12:05:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.688 12:05:40 -- common/autotest_common.sh@10 -- # set +x 00:21:34.688 [2024-11-29 12:05:40.084717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:34.688 [2024-11-29 12:05:40.084824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84155 ] 00:21:34.946 [2024-11-29 12:05:40.226724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.946 [2024-11-29 12:05:40.329661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.881 12:05:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.881 12:05:41 -- common/autotest_common.sh@862 -- # return 0 00:21:35.881 12:05:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:35.881 12:05:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:35.881 12:05:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:36.139 12:05:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.139 12:05:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.398 nvme0n1 00:21:36.398 12:05:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:36.398 12:05:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.655 Running I/O for 2 seconds... 
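Each run_bperf step in the trace follows the same shape: bdevperf is launched suspended (-z --wait-for-rpc) on its own socket, the framework is started over RPC, the target is attached with the digest option under test, and only then does bdevperf.py kick off the timed workload whose results follow below. A condensed sketch of that sequence for the randread/4096/qd128 run above (paths and the cnode1 NQN are taken from the trace; this is an illustration, not the script itself):

  sock=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s "$sock" framework_start_init
  # --ddgst enables the NVMe/TCP data digest, so every transfer is CRC32C-checked
  ./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests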
00:21:38.555 00:21:38.555 Latency(us) 00:21:38.555 [2024-11-29T12:05:44.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.555 [2024-11-29T12:05:44.066Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:38.555 nvme0n1 : 2.01 15132.03 59.11 0.00 0.00 8453.89 7566.43 20852.36 00:21:38.555 [2024-11-29T12:05:44.066Z] =================================================================================================================== 00:21:38.555 [2024-11-29T12:05:44.066Z] Total : 15132.03 59.11 0.00 0.00 8453.89 7566.43 20852.36 00:21:38.555 0 00:21:38.555 12:05:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:38.555 12:05:43 -- host/digest.sh@92 -- # get_accel_stats 00:21:38.555 12:05:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:38.555 12:05:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:38.555 12:05:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:38.555 | select(.opcode=="crc32c") 00:21:38.555 | "\(.module_name) \(.executed)"' 00:21:38.813 12:05:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:38.813 12:05:44 -- host/digest.sh@93 -- # exp_module=software 00:21:38.813 12:05:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:38.813 12:05:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:38.813 12:05:44 -- host/digest.sh@97 -- # killprocess 84155 00:21:38.813 12:05:44 -- common/autotest_common.sh@936 -- # '[' -z 84155 ']' 00:21:38.813 12:05:44 -- common/autotest_common.sh@940 -- # kill -0 84155 00:21:38.813 12:05:44 -- common/autotest_common.sh@941 -- # uname 00:21:38.813 12:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.813 12:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84155 00:21:38.813 12:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:38.813 killing process with pid 84155 00:21:38.813 Received shutdown signal, test time was about 2.000000 seconds 00:21:38.813 00:21:38.813 Latency(us) 00:21:38.813 [2024-11-29T12:05:44.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.813 [2024-11-29T12:05:44.324Z] =================================================================================================================== 00:21:38.813 [2024-11-29T12:05:44.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.813 12:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:38.813 12:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84155' 00:21:38.813 12:05:44 -- common/autotest_common.sh@955 -- # kill 84155 00:21:38.813 12:05:44 -- common/autotest_common.sh@960 -- # wait 84155 00:21:39.071 12:05:44 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:21:39.071 12:05:44 -- host/digest.sh@77 -- # local rw bs qd 00:21:39.071 12:05:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:39.071 12:05:44 -- host/digest.sh@80 -- # rw=randread 00:21:39.071 12:05:44 -- host/digest.sh@80 -- # bs=131072 00:21:39.071 12:05:44 -- host/digest.sh@80 -- # qd=16 00:21:39.071 12:05:44 -- host/digest.sh@82 -- # bperfpid=84215 00:21:39.071 12:05:44 -- host/digest.sh@83 -- # waitforlisten 84215 /var/tmp/bperf.sock 00:21:39.071 12:05:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:39.071 12:05:44 -- 
common/autotest_common.sh@829 -- # '[' -z 84215 ']' 00:21:39.071 12:05:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.071 12:05:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.071 12:05:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.071 12:05:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.071 12:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:39.071 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.071 Zero copy mechanism will not be used. 00:21:39.071 [2024-11-29 12:05:44.520874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:39.071 [2024-11-29 12:05:44.520980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84215 ] 00:21:39.329 [2024-11-29 12:05:44.655070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.329 [2024-11-29 12:05:44.750956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.290 12:05:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.290 12:05:45 -- common/autotest_common.sh@862 -- # return 0 00:21:40.290 12:05:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:40.290 12:05:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:40.290 12:05:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:40.548 12:05:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.548 12:05:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.807 nvme0n1 00:21:40.807 12:05:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:40.807 12:05:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.807 Zero copy mechanism will not be used. 00:21:40.807 Running I/O for 2 seconds... 
00:21:43.340 00:21:43.340 Latency(us) 00:21:43.340 [2024-11-29T12:05:48.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.340 [2024-11-29T12:05:48.851Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:43.340 nvme0n1 : 2.00 6536.89 817.11 0.00 0.00 2444.74 2115.03 11081.54 00:21:43.340 [2024-11-29T12:05:48.851Z] =================================================================================================================== 00:21:43.340 [2024-11-29T12:05:48.851Z] Total : 6536.89 817.11 0.00 0.00 2444.74 2115.03 11081.54 00:21:43.340 0 00:21:43.340 12:05:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:43.340 12:05:48 -- host/digest.sh@92 -- # get_accel_stats 00:21:43.340 12:05:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:43.340 12:05:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:43.340 | select(.opcode=="crc32c") 00:21:43.340 | "\(.module_name) \(.executed)"' 00:21:43.340 12:05:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:43.340 12:05:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:43.340 12:05:48 -- host/digest.sh@93 -- # exp_module=software 00:21:43.340 12:05:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:43.340 12:05:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:43.340 12:05:48 -- host/digest.sh@97 -- # killprocess 84215 00:21:43.340 12:05:48 -- common/autotest_common.sh@936 -- # '[' -z 84215 ']' 00:21:43.340 12:05:48 -- common/autotest_common.sh@940 -- # kill -0 84215 00:21:43.340 12:05:48 -- common/autotest_common.sh@941 -- # uname 00:21:43.340 12:05:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.340 12:05:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84215 00:21:43.340 killing process with pid 84215 00:21:43.340 Received shutdown signal, test time was about 2.000000 seconds 00:21:43.340 00:21:43.340 Latency(us) 00:21:43.340 [2024-11-29T12:05:48.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.340 [2024-11-29T12:05:48.851Z] =================================================================================================================== 00:21:43.340 [2024-11-29T12:05:48.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.340 12:05:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:43.340 12:05:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:43.340 12:05:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84215' 00:21:43.340 12:05:48 -- common/autotest_common.sh@955 -- # kill 84215 00:21:43.340 12:05:48 -- common/autotest_common.sh@960 -- # wait 84215 00:21:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
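After each run the script reads the crc32c accounting back from bdevperf's accel layer and checks that the expected module (software, since no hardware offload is configured here) actually executed operations. A hedged sketch of that check, reusing the exact RPC and jq filter from the trace; the variable names mirror the script's acc_module/acc_executed:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Pull per-opcode accel statistics from the running bdevperf instance and
# keep only the crc32c row, formatted as "<module_name> <executed>".
read -r acc_module acc_executed < <(
  "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
)

# The digest must have been computed in software, and at least once.
[[ $acc_module == software ]] && (( acc_executed > 0 )) \
  && echo "crc32c handled by $acc_module ($acc_executed ops)"
```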
00:21:43.599 12:05:48 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:21:43.599 12:05:48 -- host/digest.sh@77 -- # local rw bs qd 00:21:43.599 12:05:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:43.599 12:05:48 -- host/digest.sh@80 -- # rw=randwrite 00:21:43.599 12:05:48 -- host/digest.sh@80 -- # bs=4096 00:21:43.599 12:05:48 -- host/digest.sh@80 -- # qd=128 00:21:43.599 12:05:48 -- host/digest.sh@82 -- # bperfpid=84277 00:21:43.599 12:05:48 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:43.599 12:05:48 -- host/digest.sh@83 -- # waitforlisten 84277 /var/tmp/bperf.sock 00:21:43.599 12:05:48 -- common/autotest_common.sh@829 -- # '[' -z 84277 ']' 00:21:43.599 12:05:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:43.599 12:05:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.599 12:05:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:43.599 12:05:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.599 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:43.599 [2024-11-29 12:05:48.912150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:43.599 [2024-11-29 12:05:48.912453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84277 ] 00:21:43.599 [2024-11-29 12:05:49.046445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.858 [2024-11-29 12:05:49.141437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.858 12:05:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.858 12:05:49 -- common/autotest_common.sh@862 -- # return 0 00:21:43.858 12:05:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:43.858 12:05:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:43.858 12:05:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:44.116 12:05:49 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.116 12:05:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.683 nvme0n1 00:21:44.683 12:05:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:44.683 12:05:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.683 Running I/O for 2 seconds... 
00:21:46.586 00:21:46.586 Latency(us) 00:21:46.586 [2024-11-29T12:05:52.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.586 [2024-11-29T12:05:52.097Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.586 nvme0n1 : 2.00 15792.44 61.69 0.00 0.00 8096.87 7238.75 17277.67 00:21:46.586 [2024-11-29T12:05:52.097Z] =================================================================================================================== 00:21:46.586 [2024-11-29T12:05:52.097Z] Total : 15792.44 61.69 0.00 0.00 8096.87 7238.75 17277.67 00:21:46.586 0 00:21:46.586 12:05:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:46.586 12:05:52 -- host/digest.sh@92 -- # get_accel_stats 00:21:46.586 12:05:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:46.586 12:05:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:46.586 | select(.opcode=="crc32c") 00:21:46.586 | "\(.module_name) \(.executed)"' 00:21:46.586 12:05:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:46.844 12:05:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:46.844 12:05:52 -- host/digest.sh@93 -- # exp_module=software 00:21:46.844 12:05:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:46.844 12:05:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:46.844 12:05:52 -- host/digest.sh@97 -- # killprocess 84277 00:21:46.844 12:05:52 -- common/autotest_common.sh@936 -- # '[' -z 84277 ']' 00:21:46.844 12:05:52 -- common/autotest_common.sh@940 -- # kill -0 84277 00:21:46.844 12:05:52 -- common/autotest_common.sh@941 -- # uname 00:21:46.844 12:05:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.844 12:05:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84277 00:21:47.102 killing process with pid 84277 00:21:47.102 Received shutdown signal, test time was about 2.000000 seconds 00:21:47.102 00:21:47.103 Latency(us) 00:21:47.103 [2024-11-29T12:05:52.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.103 [2024-11-29T12:05:52.614Z] =================================================================================================================== 00:21:47.103 [2024-11-29T12:05:52.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.103 12:05:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:47.103 12:05:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:47.103 12:05:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84277' 00:21:47.103 12:05:52 -- common/autotest_common.sh@955 -- # kill 84277 00:21:47.103 12:05:52 -- common/autotest_common.sh@960 -- # wait 84277 00:21:47.103 12:05:52 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:21:47.103 12:05:52 -- host/digest.sh@77 -- # local rw bs qd 00:21:47.103 12:05:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:47.103 12:05:52 -- host/digest.sh@80 -- # rw=randwrite 00:21:47.103 12:05:52 -- host/digest.sh@80 -- # bs=131072 00:21:47.103 12:05:52 -- host/digest.sh@80 -- # qd=16 00:21:47.103 12:05:52 -- host/digest.sh@82 -- # bperfpid=84328 00:21:47.103 12:05:52 -- host/digest.sh@83 -- # waitforlisten 84328 /var/tmp/bperf.sock 00:21:47.103 12:05:52 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:47.103 12:05:52 -- 
common/autotest_common.sh@829 -- # '[' -z 84328 ']' 00:21:47.103 12:05:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.103 12:05:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.103 12:05:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.103 12:05:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.103 12:05:52 -- common/autotest_common.sh@10 -- # set +x 00:21:47.361 [2024-11-29 12:05:52.630793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:47.361 [2024-11-29 12:05:52.631212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84328 ] 00:21:47.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.361 Zero copy mechanism will not be used. 00:21:47.361 [2024-11-29 12:05:52.771130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.361 [2024-11-29 12:05:52.867060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.296 12:05:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.296 12:05:53 -- common/autotest_common.sh@862 -- # return 0 00:21:48.296 12:05:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:21:48.296 12:05:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:21:48.296 12:05:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:48.555 12:05:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.555 12:05:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:49.123 nvme0n1 00:21:49.123 12:05:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:21:49.123 12:05:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:49.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:49.123 Zero copy mechanism will not be used. 00:21:49.123 Running I/O for 2 seconds... 
00:21:51.035 00:21:51.035 Latency(us) 00:21:51.035 [2024-11-29T12:05:56.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.035 [2024-11-29T12:05:56.546Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:51.035 nvme0n1 : 2.00 5880.27 735.03 0.00 0.00 2715.36 2368.23 11617.75 00:21:51.035 [2024-11-29T12:05:56.546Z] =================================================================================================================== 00:21:51.035 [2024-11-29T12:05:56.546Z] Total : 5880.27 735.03 0.00 0.00 2715.36 2368.23 11617.75 00:21:51.035 0 00:21:51.035 12:05:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:21:51.035 12:05:56 -- host/digest.sh@92 -- # get_accel_stats 00:21:51.035 12:05:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:51.035 12:05:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:51.035 12:05:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:51.035 | select(.opcode=="crc32c") 00:21:51.035 | "\(.module_name) \(.executed)"' 00:21:51.293 12:05:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:21:51.293 12:05:56 -- host/digest.sh@93 -- # exp_module=software 00:21:51.551 12:05:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:21:51.551 12:05:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:51.551 12:05:56 -- host/digest.sh@97 -- # killprocess 84328 00:21:51.551 12:05:56 -- common/autotest_common.sh@936 -- # '[' -z 84328 ']' 00:21:51.551 12:05:56 -- common/autotest_common.sh@940 -- # kill -0 84328 00:21:51.551 12:05:56 -- common/autotest_common.sh@941 -- # uname 00:21:51.551 12:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.551 12:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84328 00:21:51.551 killing process with pid 84328 00:21:51.551 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.551 00:21:51.551 Latency(us) 00:21:51.551 [2024-11-29T12:05:57.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.551 [2024-11-29T12:05:57.062Z] =================================================================================================================== 00:21:51.551 [2024-11-29T12:05:57.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.551 12:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:51.551 12:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:51.551 12:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84328' 00:21:51.551 12:05:56 -- common/autotest_common.sh@955 -- # kill 84328 00:21:51.551 12:05:56 -- common/autotest_common.sh@960 -- # wait 84328 00:21:51.551 12:05:57 -- host/digest.sh@126 -- # killprocess 84123 00:21:51.551 12:05:57 -- common/autotest_common.sh@936 -- # '[' -z 84123 ']' 00:21:51.551 12:05:57 -- common/autotest_common.sh@940 -- # kill -0 84123 00:21:51.551 12:05:57 -- common/autotest_common.sh@941 -- # uname 00:21:51.551 12:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.551 12:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84123 00:21:51.810 killing process with pid 84123 00:21:51.810 12:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:51.810 12:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:51.810 12:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84123' 
00:21:51.810 12:05:57 -- common/autotest_common.sh@955 -- # kill 84123 00:21:51.810 12:05:57 -- common/autotest_common.sh@960 -- # wait 84123 00:21:52.070 ************************************ 00:21:52.070 END TEST nvmf_digest_clean 00:21:52.070 ************************************ 00:21:52.070 00:21:52.070 real 0m18.533s 00:21:52.070 user 0m35.289s 00:21:52.070 sys 0m5.428s 00:21:52.070 12:05:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:52.070 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:52.070 12:05:57 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:21:52.070 12:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:52.070 12:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:52.070 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:52.070 ************************************ 00:21:52.070 START TEST nvmf_digest_error 00:21:52.070 ************************************ 00:21:52.070 12:05:57 -- common/autotest_common.sh@1114 -- # run_digest_error 00:21:52.070 12:05:57 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:21:52.070 12:05:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:52.070 12:05:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.070 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:52.070 12:05:57 -- nvmf/common.sh@469 -- # nvmfpid=84419 00:21:52.070 12:05:57 -- nvmf/common.sh@470 -- # waitforlisten 84419 00:21:52.070 12:05:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:52.070 12:05:57 -- common/autotest_common.sh@829 -- # '[' -z 84419 ']' 00:21:52.070 12:05:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.070 12:05:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.070 12:05:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.070 12:05:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.070 12:05:57 -- common/autotest_common.sh@10 -- # set +x 00:21:52.070 [2024-11-29 12:05:57.499740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:52.070 [2024-11-29 12:05:57.499859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.329 [2024-11-29 12:05:57.636898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.329 [2024-11-29 12:05:57.742630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:52.329 [2024-11-29 12:05:57.743313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.329 [2024-11-29 12:05:57.743620] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.329 [2024-11-29 12:05:57.743873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.329 [2024-11-29 12:05:57.743977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.987 12:05:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.987 12:05:58 -- common/autotest_common.sh@862 -- # return 0 00:21:52.987 12:05:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:52.987 12:05:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.987 12:05:58 -- common/autotest_common.sh@10 -- # set +x 00:21:53.273 12:05:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.273 12:05:58 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:53.273 12:05:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.273 12:05:58 -- common/autotest_common.sh@10 -- # set +x 00:21:53.273 [2024-11-29 12:05:58.504812] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:53.273 12:05:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.273 12:05:58 -- host/digest.sh@104 -- # common_target_config 00:21:53.273 12:05:58 -- host/digest.sh@43 -- # rpc_cmd 00:21:53.273 12:05:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.274 12:05:58 -- common/autotest_common.sh@10 -- # set +x 00:21:53.274 null0 00:21:53.274 [2024-11-29 12:05:58.646824] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.274 [2024-11-29 12:05:58.670988] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.274 12:05:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.274 12:05:58 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:21:53.274 12:05:58 -- host/digest.sh@54 -- # local rw bs qd 00:21:53.274 12:05:58 -- host/digest.sh@56 -- # rw=randread 00:21:53.274 12:05:58 -- host/digest.sh@56 -- # bs=4096 00:21:53.274 12:05:58 -- host/digest.sh@56 -- # qd=128 00:21:53.274 12:05:58 -- host/digest.sh@58 -- # bperfpid=84451 00:21:53.274 12:05:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:53.274 12:05:58 -- host/digest.sh@60 -- # waitforlisten 84451 /var/tmp/bperf.sock 00:21:53.274 12:05:58 -- common/autotest_common.sh@829 -- # '[' -z 84451 ']' 00:21:53.274 12:05:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:53.274 12:05:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.274 12:05:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:53.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:53.274 12:05:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.274 12:05:58 -- common/autotest_common.sh@10 -- # set +x 00:21:53.274 [2024-11-29 12:05:58.737063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:53.274 [2024-11-29 12:05:58.737531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84451 ] 00:21:53.532 [2024-11-29 12:05:58.877831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.532 [2024-11-29 12:05:58.982210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.470 12:05:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.470 12:05:59 -- common/autotest_common.sh@862 -- # return 0 00:21:54.470 12:05:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:54.470 12:05:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:54.729 12:06:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:54.729 12:06:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.729 12:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:54.729 12:06:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.729 12:06:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:54.729 12:06:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:54.987 nvme0n1 00:21:54.987 12:06:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:54.987 12:06:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.987 12:06:00 -- common/autotest_common.sh@10 -- # set +x 00:21:54.987 12:06:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.987 12:06:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:54.987 12:06:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:54.987 Running I/O for 2 seconds... 
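The digest-error variant differs from the clean test only in how the target and the accel layer are configured: the nvmf target is started with --wait-for-rpc so crc32c can be re-assigned to the error module before init, bdevperf is told to report NVMe errors and never retry, and error injection is switched from disable to corrupt just before I/O starts, which produces the stream of "data digest error" completions that follows. A sketch of just those extra RPCs, copied from the calls visible above (the test's rpc_cmd helper targets the nvmf target's own RPC socket inside its network namespace; shown here against the default socket for brevity):

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# Target side: route crc32c through the error-injection accel module.
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# bdevperf side: per-opcode NVMe error stats, and no bdev-layer retries,
# so every injected digest error surfaces as a failed command.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Keep injection disabled while the controller is attached, then switch to
# corrupting crc32c results; the "-i 256" argument is taken verbatim from
# the trace above.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
```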
00:21:55.246 [2024-11-29 12:06:00.512716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.246 [2024-11-29 12:06:00.512801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.246 [2024-11-29 12:06:00.512818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.246 [2024-11-29 12:06:00.529947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.246 [2024-11-29 12:06:00.530026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.530043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.546991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.547317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.547341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.564994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.565066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.565092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.582249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.582548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.582570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.600261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.600342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.617570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.617860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.617882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.634741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.634806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.634821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.651769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.651841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.651858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.668743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.669043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.669065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.685433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.685501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.685530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.702054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.702130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.702145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.719414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.719486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.719502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.736151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.736224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.247 [2024-11-29 12:06:00.753288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.247 [2024-11-29 12:06:00.753354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.247 [2024-11-29 12:06:00.753371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.771253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.771579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.788614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.788679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.788696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.805690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.805760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.805776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.823004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.823285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.823306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.840611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.840686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.840710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.858300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.506 [2024-11-29 12:06:00.858635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.506 [2024-11-29 12:06:00.858657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.506 [2024-11-29 12:06:00.876598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.876679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.876697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.894214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.894561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.894582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.912404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.912477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.912494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.930357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.930498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.930532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.948484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.948580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.948599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.966314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.966664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:00.966687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:00.984443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:00.984542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 
[2024-11-29 12:06:00.984560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.507 [2024-11-29 12:06:01.002232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.507 [2024-11-29 12:06:01.002555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.507 [2024-11-29 12:06:01.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.019556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.019624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.019640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.036442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.036532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.036549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.053124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.053199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.053216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.069813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.069891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.069907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.086495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.086578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.086594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.103348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.103425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12698 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.103440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.120041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.120112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.120128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.136749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.136821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.136837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.153444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.153530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.153547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.170053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.170131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.170147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.186858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.186936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.186953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.203852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.204175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.204196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.221031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.221114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.221130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.237863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.237941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.237958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.254418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.254746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.254767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.772 [2024-11-29 12:06:01.271219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:55.772 [2024-11-29 12:06:01.271301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.772 [2024-11-29 12:06:01.271317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.287678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.287756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.287773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.304045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.304120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.304136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.320530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.320607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.320623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.336851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.336930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.336947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.353089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.353163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.369728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.369817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.369834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.387322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.387397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.387413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.405260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.405595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.405617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.423249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.423338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.423354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.441136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.441470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.441492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.458961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 
00:21:56.034 [2024-11-29 12:06:01.459040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.459057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.476436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.476528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.476554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.494292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.494370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.494386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.512065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.512146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.512162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.034 [2024-11-29 12:06:01.529262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.034 [2024-11-29 12:06:01.529346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.034 [2024-11-29 12:06:01.529373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.546696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.546772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.546788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.564372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.564448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.564464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.582018] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.582129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.582145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.607654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.607967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.608000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.625209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.625282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.625298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.642343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.642418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.642435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.659691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.659782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.659798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.676615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.676682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.676708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.693523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.693596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.693612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.709779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.709843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.709858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.725934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.726002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.726019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.742246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.742319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.742334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.758641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.758712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.758729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.774868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.774930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.774946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.294 [2024-11-29 12:06:01.791163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.294 [2024-11-29 12:06:01.791227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.294 [2024-11-29 12:06:01.791242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.807567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.807635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.807651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.823850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.823921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.823937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.839819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.839881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.839896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.855766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.855829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.855845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.871713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.871777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.871793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.887823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.887887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.887904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.904078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.904145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.904162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.921288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.921362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.921378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.938326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.938383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.954978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.955032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.955048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.972009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.972069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.972084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:01.988421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:01.988650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:01.988670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:02.005525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:02.005584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:02.005599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:02.022382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:02.022443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:02.022459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:02.038731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:02.038781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 
[2024-11-29 12:06:02.038797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.553 [2024-11-29 12:06:02.055044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.553 [2024-11-29 12:06:02.055102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.553 [2024-11-29 12:06:02.055118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.071724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.071776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.071791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.087836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.087890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.087905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.103762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.103819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.103834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.119662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.119720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.119735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.135683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.135745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.135760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.151858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.151933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7455 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.151949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.168060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.168129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.168145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.184060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.184126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.184141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.200110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.200174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.200189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.216138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.216226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.216243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.232326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.232395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.232410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.249825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.250137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.250159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.266852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.266925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:20879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.283039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.283105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.283120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.299083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.299148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.299164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.813 [2024-11-29 12:06:02.315121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:56.813 [2024-11-29 12:06:02.315184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.813 [2024-11-29 12:06:02.315200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.331393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.331488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.347551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.347616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.347631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.363983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.364053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.364069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.380460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.380796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.380820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.397202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.397285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.413663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.413745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.413761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.430657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.430731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.430746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.448496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.448597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.466028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.466099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.466115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 [2024-11-29 12:06:02.482959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x976410) 00:21:57.074 [2024-11-29 12:06:02.483030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.074 [2024-11-29 12:06:02.483046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.074 00:21:57.074 Latency(us) 00:21:57.074 [2024-11-29T12:06:02.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.074 
[2024-11-29T12:06:02.585Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:57.074 nvme0n1 : 2.01 14906.65 58.23 0.00 0.00 8579.52 7804.74 34078.72 00:21:57.074 [2024-11-29T12:06:02.585Z] =================================================================================================================== 00:21:57.074 [2024-11-29T12:06:02.585Z] Total : 14906.65 58.23 0.00 0.00 8579.52 7804.74 34078.72 00:21:57.074 0 00:21:57.074 12:06:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:57.074 12:06:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:57.074 | .driver_specific 00:21:57.074 | .nvme_error 00:21:57.074 | .status_code 00:21:57.074 | .command_transient_transport_error' 00:21:57.074 12:06:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:57.074 12:06:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:57.334 12:06:02 -- host/digest.sh@71 -- # (( 117 > 0 )) 00:21:57.334 12:06:02 -- host/digest.sh@73 -- # killprocess 84451 00:21:57.334 12:06:02 -- common/autotest_common.sh@936 -- # '[' -z 84451 ']' 00:21:57.334 12:06:02 -- common/autotest_common.sh@940 -- # kill -0 84451 00:21:57.334 12:06:02 -- common/autotest_common.sh@941 -- # uname 00:21:57.334 12:06:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.334 12:06:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84451 00:21:57.594 killing process with pid 84451 00:21:57.594 Received shutdown signal, test time was about 2.000000 seconds 00:21:57.594 00:21:57.594 Latency(us) 00:21:57.594 [2024-11-29T12:06:03.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.594 [2024-11-29T12:06:03.105Z] =================================================================================================================== 00:21:57.594 [2024-11-29T12:06:03.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.594 12:06:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:57.594 12:06:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:57.594 12:06:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84451' 00:21:57.594 12:06:02 -- common/autotest_common.sh@955 -- # kill 84451 00:21:57.594 12:06:02 -- common/autotest_common.sh@960 -- # wait 84451 00:21:57.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:57.594 12:06:03 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:21:57.594 12:06:03 -- host/digest.sh@54 -- # local rw bs qd 00:21:57.594 12:06:03 -- host/digest.sh@56 -- # rw=randread 00:21:57.594 12:06:03 -- host/digest.sh@56 -- # bs=131072 00:21:57.594 12:06:03 -- host/digest.sh@56 -- # qd=16 00:21:57.594 12:06:03 -- host/digest.sh@58 -- # bperfpid=84513 00:21:57.594 12:06:03 -- host/digest.sh@60 -- # waitforlisten 84513 /var/tmp/bperf.sock 00:21:57.594 12:06:03 -- common/autotest_common.sh@829 -- # '[' -z 84513 ']' 00:21:57.594 12:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:57.594 12:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.594 12:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:57.594 12:06:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:57.594 12:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.594 12:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:57.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:57.854 Zero copy mechanism will not be used. 00:21:57.854 [2024-11-29 12:06:03.115350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:57.854 [2024-11-29 12:06:03.115464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84513 ] 00:21:57.854 [2024-11-29 12:06:03.260524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.854 [2024-11-29 12:06:03.354945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.793 12:06:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.793 12:06:04 -- common/autotest_common.sh@862 -- # return 0 00:21:58.793 12:06:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:58.793 12:06:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:59.053 12:06:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:59.053 12:06:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.053 12:06:04 -- common/autotest_common.sh@10 -- # set +x 00:21:59.053 12:06:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.053 12:06:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:59.053 12:06:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:59.323 nvme0n1 00:21:59.323 12:06:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:59.323 12:06:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.323 12:06:04 -- common/autotest_common.sh@10 -- # set +x 00:21:59.323 12:06:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.323 12:06:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:59.323 12:06:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:59.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:59.598 Zero copy mechanism will not be used. 00:21:59.598 Running I/O for 2 seconds... 
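The trace above re-creates the digest-error scenario for a larger workload: bdevperf is started in wait-for-RPC mode (-z) with a 131072-byte random-read job at queue depth 16, NVMe error counters and unlimited bdev retries are enabled, the TCP controller is attached with data digest (--ddgst) turned on, and CRC32C error injection is re-armed to corrupt every 32nd digest before perform_tests drives I/O. Collected into one place from the commands traced in this log, a minimal sketch of that flow looks roughly like the following (paths, the target address 10.0.0.2:4420, and the NQN nqn.2016-06.io.spdk:cnode1 are taken from the trace; SPDK_REPO and TARGET_RPC are placeholders, and the socket wait is a simplified stand-in for waitforlisten):

#!/usr/bin/env bash
SPDK_REPO=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/bperf.sock"
# In the trace the crc32c injection goes through rpc_cmd, i.e. a separate SPDK
# application's RPC socket; TARGET_RPC is an assumed placeholder for that socket.
TARGET_RPC="$SPDK_REPO/scripts/rpc.py"

# Start bdevperf in wait-for-RPC mode with the randread / 131072-byte / qd16 job.
"$SPDK_REPO/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: wait until the RPC socket exists.
until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled while CRC32C injection is
# off, then corrupt every 32nd digest calculation.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then read back how many commands completed with a transient
# transport error (the same check host/digest.sh performs with bdev_get_iostat).
"$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error'

The wall of *ERROR*/*NOTICE* pairs that follows is the expected result of that setup: each injected CRC32C mismatch is reported as a data digest error on the queue pair, the corresponding len:32 READ (the 131072-byte I/O at the 4096-byte block size seen in the earlier len:1 job) is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and with --bdev-retry-count -1 the bdev layer resubmits it while the transient-error counter accumulates for the final check.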
00:21:59.598 [2024-11-29 12:06:04.903215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.903580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.908591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.908641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.908657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.913550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.913593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.913607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.918419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.918467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.918482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.923436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.923688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.923709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.928807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.928862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.928878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.933848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.933898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.933913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.938753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.938977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.938997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.943898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.943951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.943968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.948774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.948823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.948838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.953820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.954031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.954050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.598 [2024-11-29 12:06:04.958879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.598 [2024-11-29 12:06:04.958947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-11-29 12:06:04.958972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.963838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.963907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.963922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.968969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.969176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.969196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.973913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.973962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.973978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.978935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.979000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.983944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.984163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.984185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.989104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.989151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.989167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.993933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.993982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.993997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:04.998904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:04.999092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:04.999112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.003977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.004032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:59.599 [2024-11-29 12:06:05.004047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.008670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.008716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.008731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.013483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.013547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.013562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.018539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.018595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.018612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.023370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.023418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.028089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.028147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.028162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.032807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.032857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.032873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.037706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.037887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.037907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.042549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.042593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.042608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.047336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.047380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.047395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.052376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.052428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.052444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.599 [2024-11-29 12:06:05.057197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.599 [2024-11-29 12:06:05.057389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.599 [2024-11-29 12:06:05.057409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.062175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.062223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.062238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.067174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.067225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.067240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.072163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.072216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.072232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.077075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.077297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.082191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.082250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.082267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.087186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.087238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.087254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.092228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.092440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.092461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.097421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.097470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.097485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.600 [2024-11-29 12:06:05.102559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.600 [2024-11-29 12:06:05.102614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.600 [2024-11-29 12:06:05.102629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.107734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:21:59.861 [2024-11-29 12:06:05.107791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.107806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.112539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.112587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.112609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.117415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.117465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.117480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.122146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.122200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.122216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.126877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.127072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.127093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.131839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.131892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.131907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.136716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.136763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.136779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.141600] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.141653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.146302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.146368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.151081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.151125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.151140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.155975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.156020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.156034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.160725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.160772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.160787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.165452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.165497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.165527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.170192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.170236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.170251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.175095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.175145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.175160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.179833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.180031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.180052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.184772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.184823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.184839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.189564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.189611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.189626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.194270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.194318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.194332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.199027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.199224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.199243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.203993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.204050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.204065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.208740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.208791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.208807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.213504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.213565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.213579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.218667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.218722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.218737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.223835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.223889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.223905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.228688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.228736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.228751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.233562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.233610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.233625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.238425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.238626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.238647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.243257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.243305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.243320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.248120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.248166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.248182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.253069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.253131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.253154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.257968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.258171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.258192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.262910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.262953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.262967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.267575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.267614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.267628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.272477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.272560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.272587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.277411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.277457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.277472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.282363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.282412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.282428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.287183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.287226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.287241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.292188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.292241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.292256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.297112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.297158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.297174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.302015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.302060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.302074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.307008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.307192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.307212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.312153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.312196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.312211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.317053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.861 [2024-11-29 12:06:05.317096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-11-29 12:06:05.317111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.861 [2024-11-29 12:06:05.321908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.322072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.322091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.326960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.327005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.327020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.332124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.332171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.332186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.336933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.337144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.342255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.342305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.342321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.347237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.347288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.347303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.352355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.352407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.352423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.357038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.357088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.357103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.362244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.362314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.362339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:59.862 [2024-11-29 12:06:05.367718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:21:59.862 [2024-11-29 12:06:05.367774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-11-29 12:06:05.367790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.373280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.373343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.373363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.378563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:00.122 [2024-11-29 12:06:05.378614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.378629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.383583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.383631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.383646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.388618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.388668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.388684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.393621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.393688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.398723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.398773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.398788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.403497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.403585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.403599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.408454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.408533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.408550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.413466] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.413702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.413734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.418685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.418731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.418746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.423455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.423544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.423571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.428305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.428364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.428379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.433080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.433131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.433146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.437995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.438047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.438069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.443050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.443104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.443119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.447976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.448174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.448195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.453157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.453207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.453223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.458122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.458172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.458187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.463038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.463263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.463282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.468006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.468059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.468075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.472816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.472867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.472883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.477614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.122 [2024-11-29 12:06:05.477659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.122 [2024-11-29 12:06:05.477673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.122 [2024-11-29 12:06:05.482330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.482380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.482395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.487110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.487158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.487172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.492253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.492305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.492330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.497295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.497530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.497551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.502395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.502442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.502457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.507288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.507344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.507359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.512383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.512429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.512446] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.517287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.517471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.517493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.522572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.522620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.522635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.527612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.527669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.527684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.532554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.532607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.532621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.537461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.537533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.537550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.542471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.542534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.542550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.547657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.547705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.547718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.552657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.552725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.557657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.557746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.557762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.562471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.562543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.562559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.567503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.567583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.567598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.572620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.572678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.572705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.577376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.577426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.577441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.582303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.582353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.582368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.587176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.587376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.587396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.592128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.592178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.596853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.596903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.596917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.601579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.601626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.601641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.606295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.606352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.606367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.123 [2024-11-29 12:06:05.611017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.123 [2024-11-29 12:06:05.611192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.123 [2024-11-29 12:06:05.611212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.124 [2024-11-29 12:06:05.615945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.124 [2024-11-29 12:06:05.615994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.124 [2024-11-29 12:06:05.616009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.124 [2024-11-29 12:06:05.620603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.124 [2024-11-29 12:06:05.620647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.124 [2024-11-29 12:06:05.620661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.124 [2024-11-29 12:06:05.625257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.124 [2024-11-29 12:06:05.625303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.124 [2024-11-29 12:06:05.625318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.630035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.630248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.630268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.635121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.635182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.635198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.639933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.639986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.640002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.644600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.644652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.644667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.649657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:00.385 [2024-11-29 12:06:05.649731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.649746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.654688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.654746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.654761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.659897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.660140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.660160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.665262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.665320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.665340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.670369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.670426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.670442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.675440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.675733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.675754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.680669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.680744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.680760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.685741] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.385 [2024-11-29 12:06:05.685791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.385 [2024-11-29 12:06:05.685806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.385 [2024-11-29 12:06:05.690557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.690605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.690620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.695360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.695420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.695435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.700208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.700257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.700272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.704991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.705041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.705056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.709839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.710049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.710069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.714729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.714780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.714796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.719561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.719611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.719625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.724298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.724351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.724367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.729079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.729295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.734057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.734115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.734131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.738904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.738958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.738974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.743727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.743785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.748586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.748636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.748651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.753427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.753474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.753489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.758238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.758297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.758312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.762953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.763172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.763191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.767965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.768016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.768031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.772620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.772668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.772682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.777341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.777397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.386 [2024-11-29 12:06:05.777413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.386 [2024-11-29 12:06:05.782100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.386 [2024-11-29 12:06:05.782299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.782319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.787064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.787122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.787138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.791857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.791907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.791922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.796644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.796697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.796712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.801392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.801444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.806086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.806138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.810853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.810900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.810915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.815663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.815709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:00.387 [2024-11-29 12:06:05.815723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.820425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.820473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.820487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.825141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.825189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.825204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.829816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.829860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.829875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.834531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.834592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.839251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.839296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.839311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.844062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.844109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.844124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.848850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.848895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.848910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.853505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.853562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.853577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.858365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.858419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.387 [2024-11-29 12:06:05.858434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.387 [2024-11-29 12:06:05.863122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.387 [2024-11-29 12:06:05.863169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.863184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.867868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.867910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.867925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.872662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.872704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.872722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.877371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.877415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.877430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.882209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.882253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.882268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.886907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.886949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.886963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.388 [2024-11-29 12:06:05.891648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.388 [2024-11-29 12:06:05.891690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.388 [2024-11-29 12:06:05.891705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.896292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.896346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.896360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.901013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.901054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.901068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.905627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.905667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.905681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.910278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.910318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.910331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.915022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:00.649 [2024-11-29 12:06:05.915196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.915215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.919901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.919943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.919957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.924570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.924610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.924624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.929256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.929302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.929316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.933941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.934108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.934128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.938682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.938727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.938742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.943305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.943347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.943363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.948083] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.948132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.948146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.952785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.952958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.952978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.957789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.957832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.957846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.962627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.962671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.962686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.967392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.967436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.967450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.972157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.972199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.972214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.976911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.976953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.976968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.981608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.981649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.981664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.986354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.986399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.986414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.991105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.991280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.991299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:05.996062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:05.996106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:05.996122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.649 [2024-11-29 12:06:06.000796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.649 [2024-11-29 12:06:06.000838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.649 [2024-11-29 12:06:06.000853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.005525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.005566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.005580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.010152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.010322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.010343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.015037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.015085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.015109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.019747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.019793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.024471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.024529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.024545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.029171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.029223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.029239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.033841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.034064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.034086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.038744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.038791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.038807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.043333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.043379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.043393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.048135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.048184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.048199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.052918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.053128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.053148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.057838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.057888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.057902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.062576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.062640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.067372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.067427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.067442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.072072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.072264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.072283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.077013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.077056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:00.650 [2024-11-29 12:06:06.077070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.081709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.081750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.081765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.086365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.086419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.086434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.091079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.091283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.091303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.095980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.096028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.096043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.100897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.100942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.100956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.105790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.105964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.105983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.110823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.110867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.110882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.115879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.115922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.115937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.120810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.120977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.120997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.126069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.126114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.650 [2024-11-29 12:06:06.126129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.650 [2024-11-29 12:06:06.131064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.650 [2024-11-29 12:06:06.131109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.131133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.651 [2024-11-29 12:06:06.135894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.651 [2024-11-29 12:06:06.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.136105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.651 [2024-11-29 12:06:06.140833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.651 [2024-11-29 12:06:06.140882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.140896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.651 [2024-11-29 12:06:06.145752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.651 [2024-11-29 12:06:06.145796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.145812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.651 [2024-11-29 12:06:06.150542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.651 [2024-11-29 12:06:06.150585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.150599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.651 [2024-11-29 12:06:06.155301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.651 [2024-11-29 12:06:06.155345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.651 [2024-11-29 12:06:06.155360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.160191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.160244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.160259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.164967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.165016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.165030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.169792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.169975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.169995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.174852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.174905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.174920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.179780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:00.912 [2024-11-29 12:06:06.179834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.179850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.184666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.184713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.184728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.189403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.189454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.189469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.194084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.194133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.194159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.198993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.199043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.199060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.203901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.204104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.204125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.208820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.208868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.208893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.213741] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.213785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.213800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.218446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.218496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.218527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.223178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.223365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.223385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.912 [2024-11-29 12:06:06.228167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.912 [2024-11-29 12:06:06.228209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.912 [2024-11-29 12:06:06.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.232904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.232944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.232958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.237736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.237777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.237792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.242435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.242478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.242492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.247256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.247428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.247447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.252419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.252463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.252478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.257204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.257250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.257264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.261930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.261973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.261988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.266736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.266906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.266927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.271668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.271731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.271747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.276371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.276413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.276427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.281091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.281132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.281146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.285750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.285923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.290668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.290720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.290741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.295597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.295638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.295652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.300397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.300441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.300455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.305523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.305564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.305580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.310496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.310548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.310564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.315470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.315538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.315555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.320478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.320532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.320548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.325501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.325554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.325569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.330369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.330414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.330429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.335200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.335243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.335257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.340053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.340095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.340110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.344943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:00.913 [2024-11-29 12:06:06.345155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.349888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.349932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.354590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.354631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.913 [2024-11-29 12:06:06.354646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.913 [2024-11-29 12:06:06.359353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.913 [2024-11-29 12:06:06.359407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.359423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.364156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.364334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.364353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.369058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.369103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.369119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.373847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.373902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.373917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.378660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.378703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.378719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.383599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.383643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.383658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.388381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.388428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.388442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.393137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.393178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.393193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.397923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.398105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.398124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.403003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.403056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.403071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.407788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.407836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.407851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.412572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.412617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.412632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:00.914 [2024-11-29 12:06:06.417342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:00.914 [2024-11-29 12:06:06.417386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.914 [2024-11-29 12:06:06.417400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.422156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.422202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.422217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.426944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.426989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.427003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.431692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.431879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.431899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.436624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.436670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.436685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.441355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.441400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.441414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.446082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:01.175 [2024-11-29 12:06:06.446123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.446137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.450903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.451088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.451110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.455898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.455943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.455957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.460632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.460679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.460704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.465361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.465411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.465426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.470115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.470299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.470331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.475106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.475156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.475173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.479878] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.479966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.484779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.484976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.484995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.489811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.175 [2024-11-29 12:06:06.489855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.175 [2024-11-29 12:06:06.489870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.175 [2024-11-29 12:06:06.494629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.494672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.494686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.499333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.499379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.499394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.504109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.504301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.504320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.509066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.509122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.509138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.513917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.513968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.513983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.518972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.519164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.519185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.524155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.524201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.524217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.529073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.529117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.529133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.534110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.534299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.534326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.539315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.539364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.544359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.544409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.544424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.549371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.549573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.549594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.554605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.554653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.554674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.559392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.559440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.559455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.564323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.564371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.569153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.569330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.569350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.574067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.574111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.574125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.578711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.578752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.578766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.583468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.583543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.583560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.588238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.588412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.588431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.593192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.176 [2024-11-29 12:06:06.593237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.176 [2024-11-29 12:06:06.593252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.176 [2024-11-29 12:06:06.598247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.598292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.598306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.603249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.603294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.603308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.608152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.608340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.608359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.613144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.613189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.613205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.617922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.617965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.617979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.622733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.622777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.622792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.627457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.627502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.627554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.632323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.632366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.637129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.637172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.637187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.641889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.641929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.641943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.646666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.646723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.646737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.651837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.651897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.651912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.656845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.657025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.657044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.661848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.661897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.661912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.666635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.666679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.666704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.671647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.671690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.671704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.676557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.676600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.676614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.177 [2024-11-29 12:06:06.681458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.177 [2024-11-29 12:06:06.681504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.177 [2024-11-29 12:06:06.681533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.686371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.686418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.686432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.691495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.691567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.691582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.696569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.696612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.696627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.701678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.701720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.701735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.706821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.706998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.707019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.712096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.712144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.712159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.716904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 
00:22:01.438 [2024-11-29 12:06:06.716952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.716968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.721825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.722012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.722032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.726790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.726837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.726852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.731441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.731484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.731499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.736827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.737017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.737038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.741819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.438 [2024-11-29 12:06:06.741866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.438 [2024-11-29 12:06:06.741881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.438 [2024-11-29 12:06:06.746616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.746659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.746674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.751223] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.751268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.751284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.756074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.756262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.756281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.760914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.760958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.760974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.765572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.765628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.770335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.770378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.770393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.775174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.775357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.775376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.780095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.780141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.780156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.784723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.784763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.784778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.789433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.789474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.789489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.794135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.794177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.794198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.798853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.799034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.799054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.803743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.803788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.803803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.808434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.808478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.808492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.813052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.813095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.813112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.817895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.818070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.818089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.822796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.822843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.822858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.827489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.827573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.827589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.832324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.832374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.832389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.836988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.837211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.837231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.841917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.841967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.841983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.846656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.846718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.846740] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.851406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.851454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.851470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.856264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.856448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.856467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.861056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.861104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.861119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.865738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.865784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.439 [2024-11-29 12:06:06.865800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.439 [2024-11-29 12:06:06.870316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.439 [2024-11-29 12:06:06.870378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.870393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.440 [2024-11-29 12:06:06.875066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.440 [2024-11-29 12:06:06.875112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.875127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.440 [2024-11-29 12:06:06.879815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.440 [2024-11-29 12:06:06.879988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.880007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:01.440 [2024-11-29 12:06:06.884624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.440 [2024-11-29 12:06:06.884665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.884679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:01.440 [2024-11-29 12:06:06.889329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.440 [2024-11-29 12:06:06.889370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.889385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.440 [2024-11-29 12:06:06.893964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x108f5b0) 00:22:01.440 [2024-11-29 12:06:06.894006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.440 [2024-11-29 12:06:06.894021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:01.440 00:22:01.440 Latency(us) 00:22:01.440 [2024-11-29T12:06:06.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.440 [2024-11-29T12:06:06.951Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:01.440 nvme0n1 : 2.00 6340.46 792.56 0.00 0.00 2520.48 2204.39 6762.12 00:22:01.440 [2024-11-29T12:06:06.951Z] =================================================================================================================== 00:22:01.440 [2024-11-29T12:06:06.951Z] Total : 6340.46 792.56 0.00 0.00 2520.48 2204.39 6762.12 00:22:01.440 0 00:22:01.440 12:06:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:01.440 12:06:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:01.440 12:06:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:01.440 | .driver_specific 00:22:01.440 | .nvme_error 00:22:01.440 | .status_code 00:22:01.440 | .command_transient_transport_error' 00:22:01.440 12:06:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:02.008 12:06:07 -- host/digest.sh@71 -- # (( 409 > 0 )) 00:22:02.008 12:06:07 -- host/digest.sh@73 -- # killprocess 84513 00:22:02.008 12:06:07 -- common/autotest_common.sh@936 -- # '[' -z 84513 ']' 00:22:02.008 12:06:07 -- common/autotest_common.sh@940 -- # kill -0 84513 00:22:02.008 12:06:07 -- common/autotest_common.sh@941 -- # uname 00:22:02.008 12:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:02.008 12:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84513 00:22:02.008 killing process with pid 84513 00:22:02.008 Received shutdown signal, test time was about 2.000000 seconds 00:22:02.008 00:22:02.008 Latency(us) 00:22:02.008 
[2024-11-29T12:06:07.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.008 [2024-11-29T12:06:07.519Z] =================================================================================================================== 00:22:02.008 [2024-11-29T12:06:07.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.008 12:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:02.008 12:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:02.008 12:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84513' 00:22:02.008 12:06:07 -- common/autotest_common.sh@955 -- # kill 84513 00:22:02.008 12:06:07 -- common/autotest_common.sh@960 -- # wait 84513 00:22:02.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:02.008 12:06:07 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:22:02.008 12:06:07 -- host/digest.sh@54 -- # local rw bs qd 00:22:02.008 12:06:07 -- host/digest.sh@56 -- # rw=randwrite 00:22:02.008 12:06:07 -- host/digest.sh@56 -- # bs=4096 00:22:02.008 12:06:07 -- host/digest.sh@56 -- # qd=128 00:22:02.008 12:06:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:02.008 12:06:07 -- host/digest.sh@58 -- # bperfpid=84574 00:22:02.008 12:06:07 -- host/digest.sh@60 -- # waitforlisten 84574 /var/tmp/bperf.sock 00:22:02.008 12:06:07 -- common/autotest_common.sh@829 -- # '[' -z 84574 ']' 00:22:02.008 12:06:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:02.008 12:06:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.008 12:06:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:02.008 12:06:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.008 12:06:07 -- common/autotest_common.sh@10 -- # set +x 00:22:02.008 [2024-11-29 12:06:07.500326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:02.008 [2024-11-29 12:06:07.500680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84574 ] 00:22:02.267 [2024-11-29 12:06:07.633998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.267 [2024-11-29 12:06:07.729446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.204 12:06:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.204 12:06:08 -- common/autotest_common.sh@862 -- # return 0 00:22:03.204 12:06:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:03.204 12:06:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:03.463 12:06:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:03.463 12:06:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.463 12:06:08 -- common/autotest_common.sh@10 -- # set +x 00:22:03.463 12:06:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.463 12:06:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:03.463 12:06:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:03.721 nvme0n1 00:22:03.721 12:06:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:03.721 12:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.721 12:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:03.721 12:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.722 12:06:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:03.722 12:06:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:03.981 Running I/O for 2 seconds... 
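The xtrace above shows the write-path digest test being wired up end to end: bdevperf is started with its own RPC socket, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with --ddgst so TCP data digests are generated and checked, crc32c corruption is armed through the accel error framework, and the 2-second workload is started. The TRANSIENT TRANSPORT ERROR completions that follow are the expected result of that injection. Below is a condensed sketch of the same sequence, assembled only from commands visible in this trace; the binary paths, the /var/tmp/bperf.sock socket, the 10.0.0.2 target and all RPC arguments are copied from the log, while the surrounding plain shell (backgrounding, the RPC variable) is illustrative and not the digest.sh script itself.

# start bdevperf on core 1 with its own RPC socket; -z defers the run until perform_tests is called
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# keep per-command NVMe error statistics and retry failed I/O indefinitely
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# crc32c error injection starts disabled; the trace issues these through rpc_cmd
# (presumably the target application's RPC socket) rather than through the bperf socket
rpc_cmd accel_error_inject_error -o crc32c -t disable

# attach the target with data digest enabled so every data PDU carries a CRC32C digest
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# arm crc32c corruption (arguments exactly as invoked in the trace)
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# run the queued workload; corrupted digests surface as TRANSIENT TRANSPORT ERROR completions
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# afterwards the script reads the error count back from the I/O statistics,
# as get_transient_errcount did earlier in this log
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the randread run logged above, that final query returned 409 transient transport errors, which is the count digest.sh checks against zero before killing the bperf process.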
00:22:03.981 [2024-11-29 12:06:09.261022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ddc00 00:22:03.981 [2024-11-29 12:06:09.262759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.262807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.277342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fef90 00:22:03.981 [2024-11-29 12:06:09.278630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.278829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.293198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ff3c8 00:22:03.981 [2024-11-29 12:06:09.294485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.294688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.308911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190feb58 00:22:03.981 [2024-11-29 12:06:09.310196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.310381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.324698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fe720 00:22:03.981 [2024-11-29 12:06:09.326253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.326462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.340677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fe2e8 00:22:03.981 [2024-11-29 12:06:09.342169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.342414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.356590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fdeb0 00:22:03.981 [2024-11-29 12:06:09.358121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.358351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b 
p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.372656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fda78 00:22:03.981 [2024-11-29 12:06:09.374167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.374382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.388711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fd640 00:22:03.981 [2024-11-29 12:06:09.390249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.390462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.404792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fd208 00:22:03.981 [2024-11-29 12:06:09.406314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.406544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.420743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fcdd0 00:22:03.981 [2024-11-29 12:06:09.422172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.422224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.436590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fc998 00:22:03.981 [2024-11-29 12:06:09.437827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.437880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.452458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fc560 00:22:03.981 [2024-11-29 12:06:09.454009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.454074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.468611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fc128 00:22:03.981 [2024-11-29 12:06:09.469837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.469890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:03.981 [2024-11-29 12:06:09.484235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fbcf0 00:22:03.981 [2024-11-29 12:06:09.485651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.981 [2024-11-29 12:06:09.485694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.500105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fb8b8 00:22:04.241 [2024-11-29 12:06:09.501238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.501445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.515495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fb480 00:22:04.241 [2024-11-29 12:06:09.516677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.530980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fb048 00:22:04.241 [2024-11-29 12:06:09.532116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.532293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.546390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fac10 00:22:04.241 [2024-11-29 12:06:09.547736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.547777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.562149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fa7d8 00:22:04.241 [2024-11-29 12:06:09.563427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.563633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.577999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190fa3a0 00:22:04.241 [2024-11-29 12:06:09.579170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.579346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.594448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f9f68 00:22:04.241 [2024-11-29 12:06:09.595686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.595740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.610792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f9b30 00:22:04.241 [2024-11-29 12:06:09.611933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.611979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.626453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f96f8 00:22:04.241 [2024-11-29 12:06:09.627625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.627681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.642931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f92c0 00:22:04.241 [2024-11-29 12:06:09.644118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.644173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.659092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f8e88 00:22:04.241 [2024-11-29 12:06:09.660202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.674619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f8a50 00:22:04.241 [2024-11-29 12:06:09.675647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.675690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.690139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f8618 00:22:04.241 [2024-11-29 12:06:09.691168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.691210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:04.241 [2024-11-29 12:06:09.705722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f81e0 00:22:04.241 [2024-11-29 12:06:09.706808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.241 [2024-11-29 12:06:09.706851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:04.242 [2024-11-29 12:06:09.721284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f7da8 00:22:04.242 [2024-11-29 12:06:09.722272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.242 [2024-11-29 12:06:09.722329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:04.242 [2024-11-29 12:06:09.736885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f7970 00:22:04.242 [2024-11-29 12:06:09.737854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.242 [2024-11-29 12:06:09.737899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.753128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f7538 00:22:04.501 [2024-11-29 12:06:09.754184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.754228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.769386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f7100 00:22:04.501 [2024-11-29 12:06:09.770431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.770635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.785199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f6cc8 00:22:04.501 [2024-11-29 12:06:09.786164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.786209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.800859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f6890 00:22:04.501 [2024-11-29 12:06:09.801783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.801827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.816535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f6458 00:22:04.501 [2024-11-29 12:06:09.817439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.832151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f6020 00:22:04.501 [2024-11-29 12:06:09.833287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.833332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.847641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f5be8 00:22:04.501 [2024-11-29 12:06:09.848532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.848576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.862842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f57b0 00:22:04.501 [2024-11-29 12:06:09.863724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.863767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.878003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f5378 00:22:04.501 [2024-11-29 12:06:09.879075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.879116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.893500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f4f40 00:22:04.501 [2024-11-29 12:06:09.894390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.894433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.908683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f4b08 00:22:04.501 [2024-11-29 12:06:09.909496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.909551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.924781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f46d0 00:22:04.501 [2024-11-29 12:06:09.925737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.501 [2024-11-29 12:06:09.925783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:04.501 [2024-11-29 12:06:09.941164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f4298 00:22:04.502 [2024-11-29 12:06:09.942074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.502 [2024-11-29 12:06:09.942128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:04.502 [2024-11-29 12:06:09.957156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f3e60 00:22:04.502 [2024-11-29 12:06:09.958008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.502 [2024-11-29 12:06:09.958049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:04.502 [2024-11-29 12:06:09.972816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f3a28 00:22:04.502 [2024-11-29 12:06:09.973622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.502 [2024-11-29 12:06:09.973665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:04.502 [2024-11-29 12:06:09.988280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f35f0 00:22:04.502 [2024-11-29 12:06:09.989338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.502 [2024-11-29 12:06:09.989378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:04.502 [2024-11-29 12:06:10.004119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f31b8 00:22:04.502 [2024-11-29 12:06:10.005048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.502 [2024-11-29 12:06:10.005091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.019885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f2d80 00:22:04.761 [2024-11-29 12:06:10.020687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 
12:06:10.020728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.035489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f2948 00:22:04.761 [2024-11-29 12:06:10.036378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.036419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.051558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f2510 00:22:04.761 [2024-11-29 12:06:10.052392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.052432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.066864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f20d8 00:22:04.761 [2024-11-29 12:06:10.067676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.082349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f1ca0 00:22:04.761 [2024-11-29 12:06:10.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.083424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.098307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f1868 00:22:04.761 [2024-11-29 12:06:10.099223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.099265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.114125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f1430 00:22:04.761 [2024-11-29 12:06:10.115029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.115078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.130094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f0ff8 00:22:04.761 [2024-11-29 12:06:10.130820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3327 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:04.761 [2024-11-29 12:06:10.130862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.145237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f0bc0 00:22:04.761 [2024-11-29 12:06:10.145948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.145990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.160533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f0788 00:22:04.761 [2024-11-29 12:06:10.161232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.161271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.175807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190f0350 00:22:04.761 [2024-11-29 12:06:10.176754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.176790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.191225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eff18 00:22:04.761 [2024-11-29 12:06:10.192158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.192193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.207120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190efae0 00:22:04.761 [2024-11-29 12:06:10.207929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.207966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.223156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ef6a8 00:22:04.761 [2024-11-29 12:06:10.223960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.224001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.238718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ef270 00:22:04.761 [2024-11-29 12:06:10.239368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.239412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:04.761 [2024-11-29 12:06:10.253858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eee38 00:22:04.761 [2024-11-29 12:06:10.254701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.761 [2024-11-29 12:06:10.254740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.269326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eea00 00:22:05.020 [2024-11-29 12:06:10.270168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.270207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.284798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ee5c8 00:22:05.020 [2024-11-29 12:06:10.285419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.285458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.300196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ee190 00:22:05.020 [2024-11-29 12:06:10.301063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.301101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.316081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190edd58 00:22:05.020 [2024-11-29 12:06:10.316743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.316782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.331436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ed920 00:22:05.020 [2024-11-29 12:06:10.332304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.332352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.347056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ed4e8 00:22:05.020 [2024-11-29 12:06:10.347892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.020 [2024-11-29 12:06:10.347932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.020 [2024-11-29 12:06:10.363125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ed0b0 00:22:05.021 [2024-11-29 12:06:10.363833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.363875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.378845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ecc78 00:22:05.021 [2024-11-29 12:06:10.379410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.379446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.394116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ec840 00:22:05.021 [2024-11-29 12:06:10.394673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.394713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.409392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ec408 00:22:05.021 [2024-11-29 12:06:10.410029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.425555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ebfd0 00:22:05.021 [2024-11-29 12:06:10.426136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.426181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.441209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ebb98 00:22:05.021 [2024-11-29 12:06:10.441814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.441863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.456544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eb760 00:22:05.021 [2024-11-29 12:06:10.457324] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.457365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.472248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eb328 00:22:05.021 [2024-11-29 12:06:10.473039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.473095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.488044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eaef0 00:22:05.021 [2024-11-29 12:06:10.488588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.488636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.503386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190eaab8 00:22:05.021 [2024-11-29 12:06:10.504164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.504217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.021 [2024-11-29 12:06:10.518933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ea680 00:22:05.021 [2024-11-29 12:06:10.519643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.021 [2024-11-29 12:06:10.519686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.535054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190ea248 00:22:05.280 [2024-11-29 12:06:10.535605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.535647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.551166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e9e10 00:22:05.280 [2024-11-29 12:06:10.551721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.551766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.566605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e99d8 00:22:05.280 [2024-11-29 12:06:10.567054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.567098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.582120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e95a0 00:22:05.280 [2024-11-29 12:06:10.582628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.582677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.598228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e9168 00:22:05.280 [2024-11-29 12:06:10.598771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.598820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.280 [2024-11-29 12:06:10.614843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e8d30 00:22:05.280 [2024-11-29 12:06:10.615337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.280 [2024-11-29 12:06:10.615382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.631555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e88f8 00:22:05.281 [2024-11-29 12:06:10.632056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.632100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.648024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e84c0 00:22:05.281 [2024-11-29 12:06:10.648488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.648547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.664313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e8088 00:22:05.281 [2024-11-29 12:06:10.664799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.664851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.680203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e7c50 00:22:05.281 [2024-11-29 
12:06:10.680715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.680761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.696524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e7818 00:22:05.281 [2024-11-29 12:06:10.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.697029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.712213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e73e0 00:22:05.281 [2024-11-29 12:06:10.712929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.712978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.727963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e6fa8 00:22:05.281 [2024-11-29 12:06:10.728335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.728377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.743927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e6b70 00:22:05.281 [2024-11-29 12:06:10.744345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.744387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.759813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e6738 00:22:05.281 [2024-11-29 12:06:10.760221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.760271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.281 [2024-11-29 12:06:10.776015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e6300 00:22:05.281 [2024-11-29 12:06:10.776389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.281 [2024-11-29 12:06:10.776433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.792332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e5ec8 
00:22:05.591 [2024-11-29 12:06:10.792722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.792759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.808514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e5a90 00:22:05.591 [2024-11-29 12:06:10.808912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.808963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.824267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e5658 00:22:05.591 [2024-11-29 12:06:10.824636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.824686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.841152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e5220 00:22:05.591 [2024-11-29 12:06:10.841732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.841772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.857655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e4de8 00:22:05.591 [2024-11-29 12:06:10.857998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.858043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.873952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e49b0 00:22:05.591 [2024-11-29 12:06:10.874282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.591 [2024-11-29 12:06:10.874339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.591 [2024-11-29 12:06:10.890280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e4578 00:22:05.591 [2024-11-29 12:06:10.890666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.890707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.906596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with 
pdu=0x2000190e4140 00:22:05.592 [2024-11-29 12:06:10.906931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.906972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.922495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e3d08 00:22:05.592 [2024-11-29 12:06:10.922845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.922879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.938428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e38d0 00:22:05.592 [2024-11-29 12:06:10.938747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.938792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.954579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e3498 00:22:05.592 [2024-11-29 12:06:10.954906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.954947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.970838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e3060 00:22:05.592 [2024-11-29 12:06:10.971119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.971159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:10.986770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e2c28 00:22:05.592 [2024-11-29 12:06:10.987033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:10.987074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.002962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e27f0 00:22:05.592 [2024-11-29 12:06:11.003240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.003282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.018981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1825160) with pdu=0x2000190e23b8 00:22:05.592 [2024-11-29 12:06:11.019247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.035134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e1f80 00:22:05.592 [2024-11-29 12:06:11.035683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.035732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.051678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e1b48 00:22:05.592 [2024-11-29 12:06:11.051939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.051982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.067439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e1710 00:22:05.592 [2024-11-29 12:06:11.067694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.067744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.083264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e12d8 00:22:05.592 [2024-11-29 12:06:11.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.083499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.592 [2024-11-29 12:06:11.098918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e0ea0 00:22:05.592 [2024-11-29 12:06:11.099127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.592 [2024-11-29 12:06:11.099175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.114623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e0a68 00:22:05.851 [2024-11-29 12:06:11.114803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.114850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.130542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1825160) with pdu=0x2000190e0630 00:22:05.851 [2024-11-29 12:06:11.130739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.130783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.146787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190e01f8 00:22:05.851 [2024-11-29 12:06:11.146955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.147004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.163129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190dfdc0 00:22:05.851 [2024-11-29 12:06:11.163276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.163325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.179128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190df988 00:22:05.851 [2024-11-29 12:06:11.179256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.179299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.194992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190df550 00:22:05.851 [2024-11-29 12:06:11.195112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.195158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.211023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190df118 00:22:05.851 [2024-11-29 12:06:11.211145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.211179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.227268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190dece0 00:22:05.851 [2024-11-29 12:06:11.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.227408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.851 [2024-11-29 12:06:11.243631] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1825160) with pdu=0x2000190de8a8 00:22:05.851 [2024-11-29 12:06:11.243730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.851 [2024-11-29 12:06:11.243786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.851 00:22:05.851 Latency(us) 00:22:05.851 [2024-11-29T12:06:11.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.851 [2024-11-29T12:06:11.362Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:05.851 nvme0n1 : 2.01 16018.23 62.57 0.00 0.00 7984.27 6434.44 22758.87 00:22:05.851 [2024-11-29T12:06:11.362Z] =================================================================================================================== 00:22:05.851 [2024-11-29T12:06:11.362Z] Total : 16018.23 62.57 0.00 0.00 7984.27 6434.44 22758.87 00:22:05.851 0 00:22:05.851 12:06:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:05.851 12:06:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:05.851 12:06:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:05.851 | .driver_specific 00:22:05.851 | .nvme_error 00:22:05.851 | .status_code 00:22:05.851 | .command_transient_transport_error' 00:22:05.851 12:06:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:06.110 12:06:11 -- host/digest.sh@71 -- # (( 126 > 0 )) 00:22:06.110 12:06:11 -- host/digest.sh@73 -- # killprocess 84574 00:22:06.110 12:06:11 -- common/autotest_common.sh@936 -- # '[' -z 84574 ']' 00:22:06.110 12:06:11 -- common/autotest_common.sh@940 -- # kill -0 84574 00:22:06.110 12:06:11 -- common/autotest_common.sh@941 -- # uname 00:22:06.110 12:06:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:06.110 12:06:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84574 00:22:06.110 killing process with pid 84574 00:22:06.110 Received shutdown signal, test time was about 2.000000 seconds 00:22:06.110 00:22:06.110 Latency(us) 00:22:06.110 [2024-11-29T12:06:11.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.110 [2024-11-29T12:06:11.621Z] =================================================================================================================== 00:22:06.110 [2024-11-29T12:06:11.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.110 12:06:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:06.110 12:06:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:06.110 12:06:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84574' 00:22:06.110 12:06:11 -- common/autotest_common.sh@955 -- # kill 84574 00:22:06.110 12:06:11 -- common/autotest_common.sh@960 -- # wait 84574 00:22:06.369 12:06:11 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:06.369 12:06:11 -- host/digest.sh@54 -- # local rw bs qd 00:22:06.369 12:06:11 -- host/digest.sh@56 -- # rw=randwrite 00:22:06.370 12:06:11 -- host/digest.sh@56 -- # bs=131072 00:22:06.370 12:06:11 -- host/digest.sh@56 -- # qd=16 00:22:06.370 12:06:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:06.370 12:06:11 -- host/digest.sh@58 -- 
# bperfpid=84634 00:22:06.370 12:06:11 -- host/digest.sh@60 -- # waitforlisten 84634 /var/tmp/bperf.sock 00:22:06.370 12:06:11 -- common/autotest_common.sh@829 -- # '[' -z 84634 ']' 00:22:06.370 12:06:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:06.370 12:06:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.370 12:06:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:06.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:06.370 12:06:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.370 12:06:11 -- common/autotest_common.sh@10 -- # set +x 00:22:06.370 [2024-11-29 12:06:11.849648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:06.370 [2024-11-29 12:06:11.849990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84634 ] 00:22:06.370 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:06.370 Zero copy mechanism will not be used. 00:22:06.629 [2024-11-29 12:06:11.985372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.629 [2024-11-29 12:06:12.082047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.565 12:06:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.565 12:06:12 -- common/autotest_common.sh@862 -- # return 0 00:22:07.565 12:06:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.565 12:06:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.565 12:06:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:07.565 12:06:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.565 12:06:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.565 12:06:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.565 12:06:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.565 12:06:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:08.131 nvme0n1 00:22:08.131 12:06:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:08.131 12:06:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.131 12:06:13 -- common/autotest_common.sh@10 -- # set +x 00:22:08.131 12:06:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.131 12:06:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:08.131 12:06:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:08.131 Zero copy mechanism will not be used. 00:22:08.131 Running I/O for 2 seconds... 
00:22:08.131 [2024-11-29 12:06:13.552879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.553240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.553288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.558750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.559111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.564534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.564890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.564937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.570073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.570665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.570718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.575950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.576280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.576320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.581400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.581729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.581774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.586844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.587147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.587185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.131 [2024-11-29 12:06:13.592308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.131 [2024-11-29 12:06:13.592635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.131 [2024-11-29 12:06:13.592683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.597805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.598113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.598155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.603180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.603489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.603567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.608654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.608957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.608995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.614078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.614380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.614418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.619488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.619854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.619901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.625012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.625598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.630840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.631155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.631197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.132 [2024-11-29 12:06:13.636244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.132 [2024-11-29 12:06:13.636564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.132 [2024-11-29 12:06:13.636601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.390 [2024-11-29 12:06:13.641747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.642061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.642100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.647237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.647593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.647628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.652532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.652869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.657798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.658110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.658146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.663404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.663742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.663787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.669052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.669615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.669674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.674932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.675257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.675293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.680980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.681451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.687004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.687306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.687356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.692760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.693060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.693094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.698764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.699069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.699107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.704376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.704845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 
[2024-11-29 12:06:13.704881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.709980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.710433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.710749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.716081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.716562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.716806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.721846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.722589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.727924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.728414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.728635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.733906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.734400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.739841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.740641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.745816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.746286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.746521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.751496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.752011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.752073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.756925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.757273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.762159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.762459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.762495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.767267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.767783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.767822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.772849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.773156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.773195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.778020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.778324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.778371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.783212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.783703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.783735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.788644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.788945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.788983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.793731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.794033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.794074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.798943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.799245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.799284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.804271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.804586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.804625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.809401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.809720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.809757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.814652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.814961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.814996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.820058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.820359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.820396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.825286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.825602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.825638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.830559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.831189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.831236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.836691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.837058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.842391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.842882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.842932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.848635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.848974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.849016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.854434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.854906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.854941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.860252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 
[2024-11-29 12:06:13.860578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.391 [2024-11-29 12:06:13.860623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.391 [2024-11-29 12:06:13.865985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.391 [2024-11-29 12:06:13.866311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.866352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.871425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.392 [2024-11-29 12:06:13.871769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.871811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.876746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.392 [2024-11-29 12:06:13.877064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.877109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.882208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.392 [2024-11-29 12:06:13.882693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.882741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.887891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.392 [2024-11-29 12:06:13.888204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.888253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.893552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.392 [2024-11-29 12:06:13.893899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.392 [2024-11-29 12:06:13.893948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.392 [2024-11-29 12:06:13.899083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.651 [2024-11-29 12:06:13.899389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.651 [2024-11-29 12:06:13.899436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.651 [2024-11-29 12:06:13.904481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.651 [2024-11-29 12:06:13.904838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.904882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.909772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.910080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.910125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.915288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.915660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.915712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.920884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.921187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.921226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.926254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.926585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.926635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.931490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.931969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.936634] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.937136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.937188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.942087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.942266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.942311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.947303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.947421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.947457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.953421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.953569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.953614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.959140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.959279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.959329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.964796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.964896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.964931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.970560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.970741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.970777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:08.652 [2024-11-29 12:06:13.976111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.976481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.976534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.981896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.982011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.982044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.987691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.987794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.987829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.993345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.993453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:13.998840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:13.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:13.998978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.004396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.004689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.004722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.010207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.010305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.010349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.015679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.015791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.015834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.021093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.021200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.021231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.026470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.026610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.026643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.031657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.031766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.031797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.652 [2024-11-29 12:06:14.036840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.652 [2024-11-29 12:06:14.036938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.652 [2024-11-29 12:06:14.036971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.042053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.042359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.042392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.047357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.047458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.047490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.052577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.052675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.052708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.057696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.057814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.057847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.062974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.063076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.063108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.068160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.068306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.073467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.073582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.073617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.078592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.078727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.083710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.083814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.083846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.088862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.088960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.088992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.094001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.094112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.099174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.099274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.099306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.104402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.104568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.109366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.109775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.109800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.114810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.114903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.114929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.120084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.120171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 
[2024-11-29 12:06:14.120196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.125262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.125523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.125564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.130591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.130683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.130723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.135417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.135508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.135575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.140406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.140491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.140515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.145345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.145610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.145634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.150576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.150668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.150693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.653 [2024-11-29 12:06:14.155438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.653 [2024-11-29 12:06:14.155585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.653 [2024-11-29 12:06:14.155612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.160569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.160662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.160703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.165725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.165812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.165837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.170762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.170851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.170876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.175908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.176000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.176025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.180836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.180921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.180945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.185573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.185659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.185700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.190479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.190615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.190641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.195343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.914 [2024-11-29 12:06:14.195431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.914 [2024-11-29 12:06:14.195456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.914 [2024-11-29 12:06:14.200309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.200633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.200658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.205305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.205393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.205417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.210090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.210174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.210197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.214920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.215007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.215030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.219658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.219748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.219772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.224370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.224458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.224483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.229130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.229219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.229243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.233931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.234016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.234040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.238693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.238794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.238818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.243445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.243733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.243758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.248470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.248606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.248630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.253269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.253376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.253401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.258063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 
[2024-11-29 12:06:14.258164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.258188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.262806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.262893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.262917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.267743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.267867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.267893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.272692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.272782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.272807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.277652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.277770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.277795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.282770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.282864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.282907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.287895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.287983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.288009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.292982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.293075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.293100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.298398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.298694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.298735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.304037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.304127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.309129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.309219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.309243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.314224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.314481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.314506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.319349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.319442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.319468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.324261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.324351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.324375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.329151] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.915 [2024-11-29 12:06:14.329236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.915 [2024-11-29 12:06:14.329259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.915 [2024-11-29 12:06:14.333977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.334064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.334103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.338746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.338830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.338854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.343457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.343606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.343635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.348276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.348364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.348387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.353043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.353131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.353156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.357814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.357903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.357927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:08.916 [2024-11-29 12:06:14.362714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.362800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.362824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.367407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.367495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.367578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.372348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.372436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.372460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.377187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.377272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.377296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.382027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.382128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.382152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.386827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.386912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.386937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.391637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.391725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.391751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.396445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.396565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.396591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.401198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.401286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.401310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.406023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.406122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.406147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.410792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.410876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.410900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.415655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.415747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.415772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.916 [2024-11-29 12:06:14.420450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:08.916 [2024-11-29 12:06:14.420592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.916 [2024-11-29 12:06:14.420618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.425200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.425285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.425309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.430113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.430198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.430221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.435158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.435245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.440633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.440885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.440924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.446243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.446660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.446730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.451924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.452050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.452088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.457325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.457419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.457450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.462688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.462793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.462820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.467788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.467916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.467942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.472881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.472969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.472995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.477839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.477931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.477957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.482957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.483048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.483074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.487994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.488083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.488108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.492981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.493066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.493091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.497968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.498055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 
12:06:14.498098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.502890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.502981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.503006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.508041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.508127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.508152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.513265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.513363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.513392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.518446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.518809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.518847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.524337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.524456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.529846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.530150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.530188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.535546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.535667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.178 [2024-11-29 12:06:14.535704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.541005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.541322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.541359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.178 [2024-11-29 12:06:14.546479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.178 [2024-11-29 12:06:14.546600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.178 [2024-11-29 12:06:14.546630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.551715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.551810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.551840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.556998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.557108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.557136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.562470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.562749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.562989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.567961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.568085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.568113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.573250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.573479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.573504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.578601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.578697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.583910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.584006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.584063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.589212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.589462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.589486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.594507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.594630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.594657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.599640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.599735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.599762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.604951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.605062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.605089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.609994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.610078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.610102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.614700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.614784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.614809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.619571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.619668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.619696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.624371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.624466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.624492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.629182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.629594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.629621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.634478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.634610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.634641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.639431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.639581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.639610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.644576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.644705] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.644747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.649505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.649606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.654156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.654245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.654271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.659054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.659142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.659167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.663907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.663992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.664016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.668536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.668662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.668686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.673206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.673611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.678313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.678401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.678425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.179 [2024-11-29 12:06:14.683013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.179 [2024-11-29 12:06:14.683101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.179 [2024-11-29 12:06:14.683124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.688096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.688182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.688212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.693061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.693431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.693460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.698634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.698723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.698750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.704081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.704203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.704248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.709421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.709660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.709735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.714830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 
12:06:14.714916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.714942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.720040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.720126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.720151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.725071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.725157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.725182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.730047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.730283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.730479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.735418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.735509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.735581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.740508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.740637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.740663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.441 [2024-11-29 12:06:14.745373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.441 [2024-11-29 12:06:14.745607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.441 [2024-11-29 12:06:14.745632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.751104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 
00:22:09.442 [2024-11-29 12:06:14.751212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.751238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.756378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.756469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.756494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.761479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.761762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.761797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.766769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.766861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.766887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.771754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.771847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.771872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.776463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.776592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.776618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.781120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.781372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.781398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.786190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.786282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.786306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.791094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.791183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.791208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.796022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.796108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.796132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.801221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.801469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.801496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.806519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.806626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.806651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.811685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.811784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.811826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.816800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.816889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.816914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.821834] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.821920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.821944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.826660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.826763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.826788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.831567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.831658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.831685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.836463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.836594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.836620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.841487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.841791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.841816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.846625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.846730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.846756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.851502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.851650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.851676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.856885] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.856981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.857007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.862179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.862285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.862311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.867440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.867566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.867593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.872739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.872834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.872861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.877945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.878040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.442 [2024-11-29 12:06:14.878098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.442 [2024-11-29 12:06:14.883198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.442 [2024-11-29 12:06:14.883286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.883328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.888449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.888600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.888626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.443 
[2024-11-29 12:06:14.893722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.893817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.893843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.898897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.898991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.899017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.904132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.904224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.904248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.909093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.909390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.909416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.914331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.914422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.914446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.919245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.919333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.919358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.924257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.924363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.924388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.929589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.929834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.929860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.934703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.934789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.934815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.939766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.939906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.443 [2024-11-29 12:06:14.944750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.443 [2024-11-29 12:06:14.944836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.443 [2024-11-29 12:06:14.944860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.703 [2024-11-29 12:06:14.949689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.703 [2024-11-29 12:06:14.949775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.703 [2024-11-29 12:06:14.949799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.703 [2024-11-29 12:06:14.954809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.703 [2024-11-29 12:06:14.954901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.703 [2024-11-29 12:06:14.954926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.703 [2024-11-29 12:06:14.960056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.703 [2024-11-29 12:06:14.960147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.703 [2024-11-29 12:06:14.960170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.703 [2024-11-29 12:06:14.965143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.965368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.965393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.970306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.970396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.970421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.975196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.975287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.975312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.980183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.980277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.980301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.985188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.985609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.985636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.990604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.990692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.990716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:14.995499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:14.995627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:14.995654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.000619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.000714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.000754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.005497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.005626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.005651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.010469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.010612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.015341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.015426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.015450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.020403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.020797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.020823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.025665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.025768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.030542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.030647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.030671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.036105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.036345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.041201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.041290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.041316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.046205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.046303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.046329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.051601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.051695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.051721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.056774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.056860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.056884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.061624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.061741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.061782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.066697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.066783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 
12:06:15.066806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.071297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.071383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.071407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.075989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.076075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.076098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.080565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.080652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.080692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.085066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.085154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.704 [2024-11-29 12:06:15.085177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.704 [2024-11-29 12:06:15.089715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.704 [2024-11-29 12:06:15.089819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.089844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.094659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.094758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.094782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.099622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.099742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:09.705 [2024-11-29 12:06:15.099767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.104749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.104834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.104858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.109474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.109603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.109627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.114216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.114303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.114344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.118900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.118986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.119009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.123512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.123631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.123655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.128234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.128342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.132859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.132947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.132970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.137458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.137600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.141952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.142040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.142063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.146600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.146690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.146732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.151501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.151630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.151655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.156754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.156854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.156878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.161962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.162051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.162074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.166811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.166898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.166922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.171459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.171634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.176281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.176403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.176428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.181115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.181200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.181224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.185806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.185895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.185921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.190456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.190832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.190857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.195457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.195586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.195611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.200126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.200211] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.200235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.204724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.204805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.204828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.705 [2024-11-29 12:06:15.209443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.705 [2024-11-29 12:06:15.209528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.705 [2024-11-29 12:06:15.209599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.214389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.214662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.214686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.219598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.219683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.219708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.224771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.224853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.224892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.229523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.229617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.229640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.234143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.234223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.234246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.238803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.238884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.238907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.243399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.243479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.966 [2024-11-29 12:06:15.243502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.966 [2024-11-29 12:06:15.248123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.966 [2024-11-29 12:06:15.248204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.248228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.252629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.252709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.252732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.257169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.257248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.257271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.261937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.262015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.262039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.266890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 
12:06:15.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.266994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.271911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.271992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.272016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.277038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.277120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.277144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.281773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.281853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.281876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.286358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.286459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.286483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.291140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.291224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.291248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.295857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.295940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.295964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.300476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 
00:22:09.967 [2024-11-29 12:06:15.300594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.300618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.305140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.305221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.305245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.309874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.309965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.309989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.314789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.314875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.314898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.319781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.319898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.319921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.324580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.324662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.324686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.329651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.329780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.329804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.334918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.335002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.335027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.339895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.339974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.339998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.344860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.344941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.344965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.349450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.349543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.349567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.354720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.354806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.354833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.360003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.360154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.360192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.365204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.365291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.365358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.967 [2024-11-29 12:06:15.370091] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.967 [2024-11-29 12:06:15.370225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.967 [2024-11-29 12:06:15.370256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.374938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.375033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.375065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.379586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.379813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.379870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.384486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.384623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.384678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.389397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.389490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.389523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.394163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.394396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.394427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.399002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.399220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.399252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.403909] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.404112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.404144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.408652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.408814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.408847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.413394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.413627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.413660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.418283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.418391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.418421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.423759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.423882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.423917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.429013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.429108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.429135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.434325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.434430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.434457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.968 
[2024-11-29 12:06:15.439560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.439657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.439685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.444676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.444758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.444783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.449706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.449823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.449848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.454581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.454665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.454705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.459474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.459592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.459635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.464382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.464465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.464490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:09.968 [2024-11-29 12:06:15.469729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:09.968 [2024-11-29 12:06:15.469808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.968 [2024-11-29 12:06:15.469837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.474899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.474983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.475008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.480170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.480254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.480279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.485368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.485480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.490270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.490353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.490377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.495007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.495090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.495114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.499786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.499882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.499907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.504802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.227 [2024-11-29 12:06:15.504940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.227 [2024-11-29 12:06:15.504973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.227 [2024-11-29 12:06:15.510096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.510174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.510200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.515297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.515397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.515423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.520453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.520541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.520568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.525455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.525535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.525576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.530185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.530267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.530292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.534931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.535008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.535031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.228 [2024-11-29 12:06:15.539722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1823c90) with pdu=0x2000190fef90 00:22:10.228 [2024-11-29 12:06:15.539808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.228 [2024-11-29 12:06:15.539834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.228 00:22:10.228 Latency(us) 00:22:10.228 [2024-11-29T12:06:15.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.228 [2024-11-29T12:06:15.739Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:10.228 nvme0n1 : 2.00 6020.06 752.51 0.00 0.00 2651.33 1817.13 8996.31 00:22:10.228 [2024-11-29T12:06:15.739Z] =================================================================================================================== 00:22:10.228 [2024-11-29T12:06:15.739Z] Total : 6020.06 752.51 0.00 0.00 2651.33 1817.13 8996.31 00:22:10.228 0 00:22:10.228 12:06:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:10.228 12:06:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:10.228 12:06:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:10.228 | .driver_specific 00:22:10.228 | .nvme_error 00:22:10.228 | .status_code 00:22:10.228 | .command_transient_transport_error' 00:22:10.228 12:06:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:10.487 12:06:15 -- host/digest.sh@71 -- # (( 388 > 0 )) 00:22:10.487 12:06:15 -- host/digest.sh@73 -- # killprocess 84634 00:22:10.487 12:06:15 -- common/autotest_common.sh@936 -- # '[' -z 84634 ']' 00:22:10.487 12:06:15 -- common/autotest_common.sh@940 -- # kill -0 84634 00:22:10.487 12:06:15 -- common/autotest_common.sh@941 -- # uname 00:22:10.487 12:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.487 12:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84634 00:22:10.487 12:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:10.487 killing process with pid 84634 00:22:10.487 12:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:10.487 12:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84634' 00:22:10.487 12:06:15 -- common/autotest_common.sh@955 -- # kill 84634 00:22:10.487 Received shutdown signal, test time was about 2.000000 seconds 00:22:10.487 00:22:10.487 Latency(us) 00:22:10.487 [2024-11-29T12:06:15.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.487 [2024-11-29T12:06:15.998Z] =================================================================================================================== 00:22:10.487 [2024-11-29T12:06:15.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.487 12:06:15 -- common/autotest_common.sh@960 -- # wait 84634 00:22:10.746 12:06:16 -- host/digest.sh@115 -- # killprocess 84419 00:22:10.746 12:06:16 -- common/autotest_common.sh@936 -- # '[' -z 84419 ']' 00:22:10.746 12:06:16 -- common/autotest_common.sh@940 -- # kill -0 84419 00:22:10.746 12:06:16 -- common/autotest_common.sh@941 -- # uname 00:22:10.746 12:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.746 12:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84419 00:22:10.746 12:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:10.746 killing process with pid 84419 00:22:10.746 12:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:10.746 12:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84419' 00:22:10.746 12:06:16 -- common/autotest_common.sh@955 -- # kill 84419 00:22:10.746 12:06:16 -- common/autotest_common.sh@960 -- # wait 84419 00:22:11.315 
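Each repeated pair of entries above is the same failure pattern: the CRC32C data digest computed over a received data PDU in data_crc32_calc_done does not match, and the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status. The test then counts those completions by querying bdevperf's iostat over its RPC socket. A minimal sketch of that query, assuming the socket path, bdev name and JSON layout implied by the jq filter in this log:

  # Sketch only: socket path and bdev name are taken from this run; the JSON shape is
  # whatever bdev_get_iostat reports under driver_specific.nvme_error.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test passes when at least one such error was observed (388 in this run).
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"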
00:22:11.315 real 0m19.081s 00:22:11.315 user 0m36.546s 00:22:11.315 sys 0m5.321s 00:22:11.315 12:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.315 12:06:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.315 ************************************ 00:22:11.315 END TEST nvmf_digest_error 00:22:11.315 ************************************ 00:22:11.315 12:06:16 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:11.315 12:06:16 -- host/digest.sh@139 -- # nvmftestfini 00:22:11.315 12:06:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:11.315 12:06:16 -- nvmf/common.sh@116 -- # sync 00:22:11.315 12:06:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:11.315 12:06:16 -- nvmf/common.sh@119 -- # set +e 00:22:11.315 12:06:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:11.315 12:06:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:11.315 rmmod nvme_tcp 00:22:11.315 rmmod nvme_fabrics 00:22:11.315 rmmod nvme_keyring 00:22:11.315 12:06:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:11.315 12:06:16 -- nvmf/common.sh@123 -- # set -e 00:22:11.315 12:06:16 -- nvmf/common.sh@124 -- # return 0 00:22:11.315 12:06:16 -- nvmf/common.sh@477 -- # '[' -n 84419 ']' 00:22:11.315 12:06:16 -- nvmf/common.sh@478 -- # killprocess 84419 00:22:11.315 12:06:16 -- common/autotest_common.sh@936 -- # '[' -z 84419 ']' 00:22:11.315 12:06:16 -- common/autotest_common.sh@940 -- # kill -0 84419 00:22:11.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84419) - No such process 00:22:11.315 Process with pid 84419 is not found 00:22:11.315 12:06:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84419 is not found' 00:22:11.315 12:06:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:11.315 12:06:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:11.315 12:06:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:11.315 12:06:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.315 12:06:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:11.315 12:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.315 12:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.315 12:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.315 12:06:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:11.315 00:22:11.315 real 0m38.442s 00:22:11.315 user 1m12.083s 00:22:11.315 sys 0m11.113s 00:22:11.315 12:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.315 ************************************ 00:22:11.315 END TEST nvmf_digest 00:22:11.315 ************************************ 00:22:11.315 12:06:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.315 12:06:16 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:22:11.315 12:06:16 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:22:11.315 12:06:16 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:11.315 12:06:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:11.315 12:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.315 12:06:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.315 ************************************ 00:22:11.315 START TEST nvmf_multipath 00:22:11.315 ************************************ 00:22:11.315 12:06:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 
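The nvmftestfini sequence logged above tears the TCP environment down before the multipath test starts: flush I/O, unload the kernel NVMe-over-TCP modules, stop the target process and clear the initiator-side address. A rough manual equivalent, using the names traced above (here the target pid 84419 had already exited, hence the "No such process" message):

  # Teardown sketch mirroring nvmftestfini as traced above.
  sync
  modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  ip -4 addr flush nvmf_init_if  # drop the initiator veth address (10.0.0.1/24)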
00:22:11.579 * Looking for test storage... 00:22:11.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.579 12:06:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:11.580 12:06:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:11.580 12:06:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:11.580 12:06:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:11.580 12:06:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:11.580 12:06:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:11.580 12:06:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:11.580 12:06:16 -- scripts/common.sh@335 -- # IFS=.-: 00:22:11.580 12:06:16 -- scripts/common.sh@335 -- # read -ra ver1 00:22:11.580 12:06:16 -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.580 12:06:16 -- scripts/common.sh@336 -- # read -ra ver2 00:22:11.580 12:06:16 -- scripts/common.sh@337 -- # local 'op=<' 00:22:11.580 12:06:16 -- scripts/common.sh@339 -- # ver1_l=2 00:22:11.580 12:06:16 -- scripts/common.sh@340 -- # ver2_l=1 00:22:11.580 12:06:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:11.580 12:06:16 -- scripts/common.sh@343 -- # case "$op" in 00:22:11.580 12:06:16 -- scripts/common.sh@344 -- # : 1 00:22:11.580 12:06:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:11.580 12:06:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.580 12:06:16 -- scripts/common.sh@364 -- # decimal 1 00:22:11.580 12:06:16 -- scripts/common.sh@352 -- # local d=1 00:22:11.580 12:06:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.580 12:06:16 -- scripts/common.sh@354 -- # echo 1 00:22:11.580 12:06:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:11.580 12:06:16 -- scripts/common.sh@365 -- # decimal 2 00:22:11.580 12:06:16 -- scripts/common.sh@352 -- # local d=2 00:22:11.580 12:06:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.580 12:06:16 -- scripts/common.sh@354 -- # echo 2 00:22:11.580 12:06:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:11.580 12:06:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:11.580 12:06:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:11.580 12:06:16 -- scripts/common.sh@367 -- # return 0 00:22:11.580 12:06:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.580 12:06:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.580 --rc genhtml_branch_coverage=1 00:22:11.580 --rc genhtml_function_coverage=1 00:22:11.580 --rc genhtml_legend=1 00:22:11.580 --rc geninfo_all_blocks=1 00:22:11.580 --rc geninfo_unexecuted_blocks=1 00:22:11.580 00:22:11.580 ' 00:22:11.580 12:06:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.580 --rc genhtml_branch_coverage=1 00:22:11.580 --rc genhtml_function_coverage=1 00:22:11.580 --rc genhtml_legend=1 00:22:11.580 --rc geninfo_all_blocks=1 00:22:11.580 --rc geninfo_unexecuted_blocks=1 00:22:11.580 00:22:11.580 ' 00:22:11.580 12:06:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.580 --rc genhtml_branch_coverage=1 00:22:11.580 --rc genhtml_function_coverage=1 00:22:11.580 --rc genhtml_legend=1 00:22:11.580 --rc geninfo_all_blocks=1 00:22:11.580 --rc geninfo_unexecuted_blocks=1 
00:22:11.580 00:22:11.580 ' 00:22:11.580 12:06:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:11.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.580 --rc genhtml_branch_coverage=1 00:22:11.580 --rc genhtml_function_coverage=1 00:22:11.580 --rc genhtml_legend=1 00:22:11.580 --rc geninfo_all_blocks=1 00:22:11.580 --rc geninfo_unexecuted_blocks=1 00:22:11.580 00:22:11.580 ' 00:22:11.580 12:06:16 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.580 12:06:16 -- nvmf/common.sh@7 -- # uname -s 00:22:11.580 12:06:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.580 12:06:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.580 12:06:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.580 12:06:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.580 12:06:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.580 12:06:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.580 12:06:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.580 12:06:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.580 12:06:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.580 12:06:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.580 12:06:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:22:11.580 12:06:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:22:11.580 12:06:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.580 12:06:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.580 12:06:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.580 12:06:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.580 12:06:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.580 12:06:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.580 12:06:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.580 12:06:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.580 12:06:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.580 12:06:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.580 12:06:16 -- paths/export.sh@5 -- # export PATH 00:22:11.580 12:06:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.580 12:06:16 -- nvmf/common.sh@46 -- # : 0 00:22:11.580 12:06:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:11.580 12:06:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:11.580 12:06:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:11.580 12:06:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.580 12:06:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.580 12:06:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:11.580 12:06:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:11.580 12:06:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:11.580 12:06:16 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:11.580 12:06:16 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.580 12:06:16 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.580 12:06:16 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:11.580 12:06:16 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.580 12:06:16 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:11.580 12:06:16 -- host/multipath.sh@30 -- # nvmftestinit 00:22:11.580 12:06:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:11.580 12:06:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.580 12:06:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:11.580 12:06:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:11.580 12:06:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:11.580 12:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.580 12:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.580 12:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.580 12:06:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:11.580 12:06:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:11.580 12:06:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:11.580 12:06:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:11.580 12:06:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:11.580 12:06:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:11.580 12:06:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.580 12:06:17 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.580 12:06:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:11.580 12:06:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:11.580 12:06:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.580 12:06:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.580 12:06:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.580 12:06:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.580 12:06:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.580 12:06:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.580 12:06:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.580 12:06:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.580 12:06:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:11.580 12:06:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:11.580 Cannot find device "nvmf_tgt_br" 00:22:11.580 12:06:17 -- nvmf/common.sh@154 -- # true 00:22:11.580 12:06:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.580 Cannot find device "nvmf_tgt_br2" 00:22:11.580 12:06:17 -- nvmf/common.sh@155 -- # true 00:22:11.580 12:06:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:11.581 12:06:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:11.581 Cannot find device "nvmf_tgt_br" 00:22:11.581 12:06:17 -- nvmf/common.sh@157 -- # true 00:22:11.581 12:06:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:11.581 Cannot find device "nvmf_tgt_br2" 00:22:11.581 12:06:17 -- nvmf/common.sh@158 -- # true 00:22:11.581 12:06:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:11.839 12:06:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:11.839 12:06:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.839 12:06:17 -- nvmf/common.sh@161 -- # true 00:22:11.839 12:06:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.839 12:06:17 -- nvmf/common.sh@162 -- # true 00:22:11.839 12:06:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.839 12:06:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.839 12:06:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.839 12:06:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.839 12:06:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.839 12:06:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.839 12:06:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.839 12:06:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:11.839 12:06:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:11.839 12:06:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:11.839 12:06:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:11.839 12:06:17 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:22:11.839 12:06:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:11.839 12:06:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.839 12:06:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.839 12:06:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.839 12:06:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:11.839 12:06:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:11.839 12:06:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.839 12:06:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.839 12:06:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.839 12:06:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.839 12:06:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.839 12:06:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:11.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:22:11.839 00:22:11.839 --- 10.0.0.2 ping statistics --- 00:22:11.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.839 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:11.839 12:06:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:11.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:11.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:22:11.839 00:22:11.839 --- 10.0.0.3 ping statistics --- 00:22:11.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.839 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:11.839 12:06:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:22:11.839 00:22:11.839 --- 10.0.0.1 ping statistics --- 00:22:11.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.839 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:11.839 12:06:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.839 12:06:17 -- nvmf/common.sh@421 -- # return 0 00:22:11.839 12:06:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.839 12:06:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.839 12:06:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.839 12:06:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.839 12:06:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.839 12:06:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.839 12:06:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:12.098 12:06:17 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:12.098 12:06:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:12.098 12:06:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.098 12:06:17 -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 12:06:17 -- nvmf/common.sh@469 -- # nvmfpid=84915 00:22:12.098 12:06:17 -- nvmf/common.sh@470 -- # waitforlisten 84915 00:22:12.098 12:06:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:12.098 12:06:17 -- common/autotest_common.sh@829 -- # '[' -z 84915 ']' 00:22:12.098 12:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.098 12:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.098 12:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.098 12:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.098 12:06:17 -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 [2024-11-29 12:06:17.406760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:12.098 [2024-11-29 12:06:17.406903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.098 [2024-11-29 12:06:17.548044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:12.357 [2024-11-29 12:06:17.683020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:12.357 [2024-11-29 12:06:17.683502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.357 [2024-11-29 12:06:17.683677] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.357 [2024-11-29 12:06:17.683763] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
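The nvmf_veth_init sequence above builds the test network: one initiator-side veth on the host and two target-side veths whose far ends live in the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, so the initiator at 10.0.0.1 can reach the target listeners at 10.0.0.2 and 10.0.0.3 (the earlier "Cannot find device" / "Cannot open network namespace" lines are only the script cleaning up leftovers that do not exist on a fresh run). A condensed sketch of the equivalent manual setup, using the same device names and addresses that appear in the log (run as root):

  # Namespace for the target and one veth pair per interface; the *_br peers stay on the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side ends together.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Open NVMe/TCP port 4420 toward the initiator interface, allow bridge-local forwarding,
  # and sanity-check reachability the same way the script does.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1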
00:22:12.357 [2024-11-29 12:06:17.684009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.357 [2024-11-29 12:06:17.684024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.922 12:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.922 12:06:18 -- common/autotest_common.sh@862 -- # return 0 00:22:12.922 12:06:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:12.922 12:06:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.922 12:06:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.181 12:06:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.181 12:06:18 -- host/multipath.sh@33 -- # nvmfapp_pid=84915 00:22:13.181 12:06:18 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:13.439 [2024-11-29 12:06:18.716342] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.439 12:06:18 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:13.699 Malloc0 00:22:13.699 12:06:19 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:13.958 12:06:19 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.217 12:06:19 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.475 [2024-11-29 12:06:19.841572] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.475 12:06:19 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:14.734 [2024-11-29 12:06:20.093737] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:14.734 12:06:20 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:14.734 12:06:20 -- host/multipath.sh@44 -- # bdevperf_pid=84966 00:22:14.734 12:06:20 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.734 12:06:20 -- host/multipath.sh@47 -- # waitforlisten 84966 /var/tmp/bdevperf.sock 00:22:14.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.734 12:06:20 -- common/autotest_common.sh@829 -- # '[' -z 84966 ']' 00:22:14.734 12:06:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.734 12:06:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.734 12:06:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
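With nvmf_tgt listening on /var/tmp/spdk.sock inside the namespace, the test then provisions what it will exercise: a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and two listeners on 10.0.0.2 ports 4420 and 4421. A condensed sketch of that rpc.py sequence as it appears in the log (flags kept verbatim; -r enables ANA reporting, which the multipath checks below depend on):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # TCP transport with the flags the test passes (-o -u 8192).
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks, added as a namespace of the subsystem.
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0

  # Two listeners on the same address but different ports, giving the host two paths.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

The I/O generator is started separately, as shown at the end of the previous block (bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90), and is configured over its own RPC socket in the next step.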
00:22:14.734 12:06:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.734 12:06:20 -- common/autotest_common.sh@10 -- # set +x 00:22:15.672 12:06:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.672 12:06:21 -- common/autotest_common.sh@862 -- # return 0 00:22:15.672 12:06:21 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:16.238 12:06:21 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:16.496 Nvme0n1 00:22:16.496 12:06:21 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:16.754 Nvme0n1 00:22:16.754 12:06:22 -- host/multipath.sh@78 -- # sleep 1 00:22:16.754 12:06:22 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:17.707 12:06:23 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:17.707 12:06:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:17.974 12:06:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.233 12:06:23 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:18.233 12:06:23 -- host/multipath.sh@65 -- # dtrace_pid=85017 00:22:18.233 12:06:23 -- host/multipath.sh@66 -- # sleep 6 00:22:18.233 12:06:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:24.796 12:06:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:24.796 12:06:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:24.796 12:06:29 -- host/multipath.sh@67 -- # active_port=4421 00:22:24.796 12:06:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.796 Attaching 4 probes... 
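The bdevperf side is configured through that second RPC socket: the same subsystem is attached once via 10.0.0.2:4420 and once via 10.0.0.2:4421 with -x multipath, so both connections become paths of a single Nvme0n1 bdev, and from then on the test steers I/O purely by changing each listener's ANA state on the target (set_ANA_state). A condensed sketch with the commands as they appear in the log (the -r/-l/-o retry and timeout values are simply kept as-is):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Host-side options, then one controller attach per path; the second attach adds 4421
  # as an additional path (-x multipath) of the same Nvme0 controller / Nvme0n1 bdev.
  $RPC -s "$SOCK" bdev_nvme_set_options -r -1
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$NQN" -l -1 -o 10
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n "$NQN" -x multipath -l -1 -o 10

  # set_ANA_state non_optimized optimized: demote 4420, promote 4421, so I/O should
  # land on 4421 during the next measurement window.
  $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized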
00:22:24.796 @path[10.0.0.2, 4421]: 18175 00:22:24.796 @path[10.0.0.2, 4421]: 18648 00:22:24.796 @path[10.0.0.2, 4421]: 18765 00:22:24.796 @path[10.0.0.2, 4421]: 19224 00:22:24.796 @path[10.0.0.2, 4421]: 18729 00:22:24.796 12:06:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:24.796 12:06:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:24.796 12:06:30 -- host/multipath.sh@69 -- # sed -n 1p 00:22:24.796 12:06:30 -- host/multipath.sh@69 -- # port=4421 00:22:24.796 12:06:30 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:24.796 12:06:30 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:24.796 12:06:30 -- host/multipath.sh@72 -- # kill 85017 00:22:24.796 12:06:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.796 12:06:30 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:24.796 12:06:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:25.054 12:06:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:25.312 12:06:30 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:25.312 12:06:30 -- host/multipath.sh@65 -- # dtrace_pid=85131 00:22:25.312 12:06:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.312 12:06:30 -- host/multipath.sh@66 -- # sleep 6 00:22:31.875 12:06:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.875 12:06:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:31.875 12:06:36 -- host/multipath.sh@67 -- # active_port=4420 00:22:31.875 12:06:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.875 Attaching 4 probes... 
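Each confirm_io_on_port window above follows the same pattern: bpftrace (scripts/bpf/nvmf_path.bt, attached to the nvmf_tgt PID) counts completions per path for about six seconds into trace.txt, the script asks the target which listener currently reports the expected ANA state, and then checks that the port extracted from the trace matches it. A minimal sketch of just the verification step, assuming trace.txt already holds the "@path[10.0.0.2, PORT]: count" lines shown above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  expected_state=optimized    # or non_optimized, or "" when no path should carry I/O

  # Port of the listener currently in the expected ANA state (empty if none matches).
  active_port=$($RPC nvmf_subsystem_get_listeners "$NQN" |
      jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

  # First port that actually saw I/O according to the bpftrace samples.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

  # Pass when they agree (two empty strings also count, for the "no active path" case).
  [[ "$port" == "$active_port" ]] && echo "I/O confirmed on port ${port:-<none>}"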
00:22:31.875 @path[10.0.0.2, 4420]: 19004 00:22:31.875 @path[10.0.0.2, 4420]: 19601 00:22:31.875 @path[10.0.0.2, 4420]: 19493 00:22:31.875 @path[10.0.0.2, 4420]: 20411 00:22:31.875 @path[10.0.0.2, 4420]: 17306 00:22:31.875 12:06:36 -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.875 12:06:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.875 12:06:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:31.875 12:06:36 -- host/multipath.sh@69 -- # port=4420 00:22:31.875 12:06:36 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:31.875 12:06:36 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:31.875 12:06:36 -- host/multipath.sh@72 -- # kill 85131 00:22:31.875 12:06:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.875 12:06:36 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:31.875 12:06:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:31.875 12:06:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:32.134 12:06:37 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:32.134 12:06:37 -- host/multipath.sh@65 -- # dtrace_pid=85247 00:22:32.134 12:06:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:32.134 12:06:37 -- host/multipath.sh@66 -- # sleep 6 00:22:38.730 12:06:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:38.730 12:06:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:38.730 12:06:43 -- host/multipath.sh@67 -- # active_port=4421 00:22:38.730 12:06:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.730 Attaching 4 probes... 
00:22:38.730 @path[10.0.0.2, 4421]: 15312 00:22:38.730 @path[10.0.0.2, 4421]: 18348 00:22:38.730 @path[10.0.0.2, 4421]: 18749 00:22:38.730 @path[10.0.0.2, 4421]: 18762 00:22:38.730 @path[10.0.0.2, 4421]: 18838 00:22:38.730 12:06:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:38.730 12:06:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:38.730 12:06:43 -- host/multipath.sh@69 -- # sed -n 1p 00:22:38.730 12:06:43 -- host/multipath.sh@69 -- # port=4421 00:22:38.730 12:06:43 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.730 12:06:43 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.730 12:06:43 -- host/multipath.sh@72 -- # kill 85247 00:22:38.730 12:06:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.730 12:06:43 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:38.730 12:06:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:38.730 12:06:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:38.988 12:06:44 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:38.988 12:06:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:38.988 12:06:44 -- host/multipath.sh@65 -- # dtrace_pid=85361 00:22:38.988 12:06:44 -- host/multipath.sh@66 -- # sleep 6 00:22:45.552 12:06:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:45.552 12:06:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:45.552 12:06:50 -- host/multipath.sh@67 -- # active_port= 00:22:45.552 12:06:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.552 Attaching 4 probes... 
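The window just started is the degenerate case: both listeners have been set to inaccessible and confirm_io_on_port is called with empty arguments, so the expectation is that the six-second trace records no "@path" samples at all and that no listener reports the empty ANA state either. Reusing the variables from the sketch above, the check reduces to two empty strings comparing equal:

  # No listener is in state "", and trace.txt has no "@path" lines, so both come back empty.
  active_port=$($RPC nvmf_subsystem_get_listeners "$NQN" |
      jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid')
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
  [[ "$port" == '' ]] && [[ "$active_port" == '' ]] && echo "no I/O on any path, as expected"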
00:22:45.552 00:22:45.552 00:22:45.552 00:22:45.552 00:22:45.552 00:22:45.552 12:06:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:45.552 12:06:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:45.552 12:06:50 -- host/multipath.sh@69 -- # sed -n 1p 00:22:45.552 12:06:50 -- host/multipath.sh@69 -- # port= 00:22:45.552 12:06:50 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:45.552 12:06:50 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:45.552 12:06:50 -- host/multipath.sh@72 -- # kill 85361 00:22:45.552 12:06:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.552 12:06:50 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:45.552 12:06:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:45.552 12:06:50 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:45.812 12:06:51 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:45.812 12:06:51 -- host/multipath.sh@65 -- # dtrace_pid=85479 00:22:45.812 12:06:51 -- host/multipath.sh@66 -- # sleep 6 00:22:45.812 12:06:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:52.397 12:06:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:52.397 12:06:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:52.397 12:06:57 -- host/multipath.sh@67 -- # active_port=4421 00:22:52.397 12:06:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.397 Attaching 4 probes... 
00:22:52.397 @path[10.0.0.2, 4421]: 16372 00:22:52.397 @path[10.0.0.2, 4421]: 16748 00:22:52.397 @path[10.0.0.2, 4421]: 17703 00:22:52.397 @path[10.0.0.2, 4421]: 17145 00:22:52.397 @path[10.0.0.2, 4421]: 17585 00:22:52.397 12:06:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:52.397 12:06:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:52.397 12:06:57 -- host/multipath.sh@69 -- # sed -n 1p 00:22:52.397 12:06:57 -- host/multipath.sh@69 -- # port=4421 00:22:52.397 12:06:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.397 12:06:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.397 12:06:57 -- host/multipath.sh@72 -- # kill 85479 00:22:52.397 12:06:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.397 12:06:57 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:52.397 [2024-11-29 12:06:57.792534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 [2024-11-29 12:06:57.792922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d7a0 is same with the state(5) to be set 00:22:52.397 12:06:57 -- host/multipath.sh@101 -- # sleep 1 00:22:53.334 12:06:58 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:53.334 12:06:58 -- host/multipath.sh@65 -- # dtrace_pid=85601 00:22:53.334 12:06:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:53.334 12:06:58 -- host/multipath.sh@66 -- # sleep 6 00:22:59.900 12:07:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:59.900 12:07:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:59.900 12:07:05 -- host/multipath.sh@67 -- # active_port=4420 00:22:59.900 12:07:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.900 Attaching 4 probes... 00:22:59.900 @path[10.0.0.2, 4420]: 15619 00:22:59.900 @path[10.0.0.2, 4420]: 15824 00:22:59.900 @path[10.0.0.2, 4420]: 15983 00:22:59.900 @path[10.0.0.2, 4420]: 16110 00:22:59.900 @path[10.0.0.2, 4420]: 15976 00:22:59.900 12:07:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:59.900 12:07:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:59.900 12:07:05 -- host/multipath.sh@69 -- # sed -n 1p 00:22:59.900 12:07:05 -- host/multipath.sh@69 -- # port=4420 00:22:59.900 12:07:05 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:59.900 12:07:05 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:59.900 12:07:05 -- host/multipath.sh@72 -- # kill 85601 00:22:59.900 12:07:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.900 12:07:05 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.900 [2024-11-29 12:07:05.332544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.900 12:07:05 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:00.158 12:07:05 -- host/multipath.sh@111 -- # sleep 6 00:23:06.725 12:07:11 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:06.725 12:07:11 -- host/multipath.sh@65 -- # dtrace_pid=85777 00:23:06.725 12:07:11 -- host/multipath.sh@66 -- # sleep 6 00:23:06.725 12:07:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84915 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.324 12:07:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:13.324 12:07:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:13.324 12:07:17 -- host/multipath.sh@67 -- # active_port=4421 00:23:13.324 12:07:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.324 Attaching 4 probes... 
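The last leg above exercises a path disappearing and returning: the 4421 listener is removed outright (the repeated tcp.c recv-state messages appear while the target drops the existing 4421 connections), I/O is confirmed to continue on the remaining non_optimized 4420 path, and the listener is then re-added and promoted to optimized; the probe counts that follow should show I/O back on 4421 before bdevperf is shut down. A condensed sketch of that remove/re-add sequence from the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Drop the optimized path; the multipath bdev on the host should keep I/O flowing
  # through 10.0.0.2:4420 (verified by the confirm_io_on_port non_optimized 4420 window).
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

  # Bring the path back and promote it again; I/O should migrate back to 4421.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized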
00:23:13.324 @path[10.0.0.2, 4421]: 17056 00:23:13.324 @path[10.0.0.2, 4421]: 17093 00:23:13.324 @path[10.0.0.2, 4421]: 16481 00:23:13.324 @path[10.0.0.2, 4421]: 16741 00:23:13.324 @path[10.0.0.2, 4421]: 17047 00:23:13.324 12:07:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:13.324 12:07:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:13.324 12:07:17 -- host/multipath.sh@69 -- # sed -n 1p 00:23:13.324 12:07:17 -- host/multipath.sh@69 -- # port=4421 00:23:13.324 12:07:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.324 12:07:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.324 12:07:17 -- host/multipath.sh@72 -- # kill 85777 00:23:13.324 12:07:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.324 12:07:17 -- host/multipath.sh@114 -- # killprocess 84966 00:23:13.324 12:07:17 -- common/autotest_common.sh@936 -- # '[' -z 84966 ']' 00:23:13.324 12:07:17 -- common/autotest_common.sh@940 -- # kill -0 84966 00:23:13.324 12:07:17 -- common/autotest_common.sh@941 -- # uname 00:23:13.324 12:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.324 12:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84966 00:23:13.324 12:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:13.324 killing process with pid 84966 00:23:13.324 12:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:13.324 12:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84966' 00:23:13.324 12:07:18 -- common/autotest_common.sh@955 -- # kill 84966 00:23:13.324 12:07:18 -- common/autotest_common.sh@960 -- # wait 84966 00:23:13.324 Connection closed with partial response: 00:23:13.324 00:23:13.324 00:23:13.324 12:07:18 -- host/multipath.sh@116 -- # wait 84966 00:23:13.324 12:07:18 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:13.324 [2024-11-29 12:06:20.157418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:13.324 [2024-11-29 12:06:20.157541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84966 ] 00:23:13.324 [2024-11-29 12:06:20.293788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.324 [2024-11-29 12:06:20.424342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.324 Running I/O for 90 seconds... 
00:23:13.324 [2024-11-29 12:06:30.571317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.324 [2024-11-29 12:06:30.571432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.324 [2024-11-29 12:06:30.571547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.324 [2024-11-29 12:06:30.571592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.324 [2024-11-29 12:06:30.571627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.324 [2024-11-29 12:06:30.571661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.324 [2024-11-29 12:06:30.571696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.324 [2024-11-29 12:06:30.571731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.324 [2024-11-29 12:06:30.571751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.571766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.571802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.571836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.571902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.571940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.571973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.571993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.572937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.572972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.572992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.573006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:34 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.573129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.325 [2024-11-29 12:06:30.573230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.325 [2024-11-29 12:06:30.573283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.325 [2024-11-29 12:06:30.573298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.573669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.573769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:23:13.326 [2024-11-29 12:06:30.573788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.573881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.573949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.573969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.573983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.326 [2024-11-29 12:06:30.574599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.326 [2024-11-29 12:06:30.574666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:13.326 [2024-11-29 12:06:30.574693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.574708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.574743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.574777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.574817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:13.327 [2024-11-29 12:06:30.574852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.574886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.574920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.574957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.574977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.574998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.575033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.575376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.575390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:23:13.327 [2024-11-29 12:06:30.577718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.327 [2024-11-29 12:06:30.577734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.327 [2024-11-29 12:06:30.577890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.327 [2024-11-29 12:06:30.577904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:30.577940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:30.577960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:30.577983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:30.578000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:30.578022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:30.578038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.143935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.143961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.143993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.144170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.144477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.144508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.144548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:13.328 [2024-11-29 12:06:37.145089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.145326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.328 [2024-11-29 12:06:37.145365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.145403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.145441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.145489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.328 [2024-11-29 12:06:37.145514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.328 [2024-11-29 12:06:37.145529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.145801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.145837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.145897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.145934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.145970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.145992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:23:13.329 [2024-11-29 12:06:37.146317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.329 [2024-11-29 12:06:37.146661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.329 [2024-11-29 12:06:37.146916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.329 [2024-11-29 12:06:37.146930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.146952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.146966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.146989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:13.330 [2024-11-29 12:06:37.147558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.147777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.147792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.148763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.148814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.148858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.148900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.148943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.148972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.149089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.149260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.330 [2024-11-29 12:06:37.149435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.330 [2024-11-29 12:06:37.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.330 [2024-11-29 12:06:37.149587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:37.149645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:37.149720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:37.149763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:37.149806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:37.149848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:23:13.331 [2024-11-29 12:06:37.149876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:37.149891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:37.149919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:37.149934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.297561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.297931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.297965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.297985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.297999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.331 [2024-11-29 12:06:44.298498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:13.331 [2024-11-29 12:06:44.298671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.331 [2024-11-29 12:06:44.298785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:13.331 [2024-11-29 12:06:44.298805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.298819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.298840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.298854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.298875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.298890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.298910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.298924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.298946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.298960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.298980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.298994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:23:13.332 [2024-11-29 12:06:44.299808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.299936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.299973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.299993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.300033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.300070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.300107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.300147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.300193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.332 [2024-11-29 12:06:44.300233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.300270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.332 [2024-11-29 12:06:44.300311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.332 [2024-11-29 12:06:44.300334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.300386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.300463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.300500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.300914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.300963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.300984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:13.333 [2024-11-29 12:06:44.301014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.301197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.301232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.301267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.333 [2024-11-29 12:06:44.301479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.301514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.301535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.301561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.302725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.302753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.302787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.302804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.302845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.333 [2024-11-29 12:06:44.302863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.333 [2024-11-29 12:06:44.302891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.302906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.302933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.302948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.302976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.302991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:23:13.334 [2024-11-29 12:06:44.303444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.303828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.303965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.303993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:44.304009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:44.304037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.334 [2024-11-29 12:06:44.304061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.792985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.334 [2024-11-29 12:06:57.793501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.334 [2024-11-29 12:06:57.793515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:13.335 [2024-11-29 12:06:57.793629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.335 [2024-11-29 12:06:57.793761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 12:06:57.793899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.335 [2024-11-29 12:06:57.793912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.335 [2024-11-29 
12:06:57.793926] - [2024-11-29 12:06:57.796836] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs for the remaining queued I/O on sqid:1 (READ and WRITE, nsid:1, lba 112832-114000, len:8), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.337 [2024-11-29 12:06:57.796849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3100 is same with the state(5) to be set
00:23:13.337 [2024-11-29 12:06:57.796866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:13.337 [2024-11-29 12:06:57.796876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:13.337 [2024-11-29 12:06:57.796886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113384 len:8 PRP1 0x0 PRP2 0x0
00:23:13.337 [2024-11-29 12:06:57.796899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:13.337 [2024-11-29 12:06:57.796966] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c3100 was disconnected and freed. reset controller.
00:23:13.337 [2024-11-29 12:06:57.798067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:13.337 [2024-11-29 12:06:57.798152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d23c0 (9): Bad file descriptor
00:23:13.337 [2024-11-29 12:06:57.798464] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:13.337 [2024-11-29 12:06:57.798555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:13.337 [2024-11-29 12:06:57.798612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:13.337 [2024-11-29 12:06:57.798635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d23c0 with addr=10.0.0.2, port=4421
00:23:13.337 [2024-11-29 12:06:57.798650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d23c0 is same with the state(5) to be set
00:23:13.337 [2024-11-29 12:06:57.798683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d23c0 (9): Bad file descriptor
00:23:13.338 [2024-11-29 12:06:57.798711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:13.338 [2024-11-29 12:06:57.798727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:13.338 [2024-11-29 12:06:57.798741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:13.338 [2024-11-29 12:06:57.798771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:13.338 [2024-11-29 12:06:57.798788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:13.338 [2024-11-29 12:07:07.861430] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
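The block above is the failover behavior this multipath run is meant to exercise: once the active listener disappears, every command still queued on the submission queue is completed with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme resets the controller and reconnects on the surviving path (10.0.0.2:4421). As a minimal illustrative sketch (not the literal multipath.sh helper), the same path flip can be driven from the target side with the rpc.py calls that appear elsewhere in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Expose two TCP paths to the same subsystem.
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  # While host I/O runs, drop the active path; queued commands are aborted with
  # SQ DELETION and the host reconnects on port 4421, as logged above.
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420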
00:23:13.338 Received shutdown signal, test time was about 55.814399 seconds 00:23:13.338 00:23:13.338 Latency(us) 00:23:13.338 [2024-11-29T12:07:18.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.338 [2024-11-29T12:07:18.849Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.338 Verification LBA range: start 0x0 length 0x4000 00:23:13.338 Nvme0n1 : 55.81 10112.00 39.50 0.00 0.00 12638.99 175.94 7046430.72 00:23:13.338 [2024-11-29T12:07:18.849Z] =================================================================================================================== 00:23:13.338 [2024-11-29T12:07:18.849Z] Total : 10112.00 39.50 0.00 0.00 12638.99 175.94 7046430.72 00:23:13.338 12:07:18 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.338 12:07:18 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:13.338 12:07:18 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:13.338 12:07:18 -- host/multipath.sh@125 -- # nvmftestfini 00:23:13.338 12:07:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.338 12:07:18 -- nvmf/common.sh@116 -- # sync 00:23:13.338 12:07:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:13.338 12:07:18 -- nvmf/common.sh@119 -- # set +e 00:23:13.338 12:07:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.338 12:07:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:13.338 rmmod nvme_tcp 00:23:13.338 rmmod nvme_fabrics 00:23:13.338 rmmod nvme_keyring 00:23:13.338 12:07:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.338 12:07:18 -- nvmf/common.sh@123 -- # set -e 00:23:13.338 12:07:18 -- nvmf/common.sh@124 -- # return 0 00:23:13.338 12:07:18 -- nvmf/common.sh@477 -- # '[' -n 84915 ']' 00:23:13.338 12:07:18 -- nvmf/common.sh@478 -- # killprocess 84915 00:23:13.338 12:07:18 -- common/autotest_common.sh@936 -- # '[' -z 84915 ']' 00:23:13.338 12:07:18 -- common/autotest_common.sh@940 -- # kill -0 84915 00:23:13.338 12:07:18 -- common/autotest_common.sh@941 -- # uname 00:23:13.338 12:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.338 12:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84915 00:23:13.338 12:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.338 killing process with pid 84915 00:23:13.338 12:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.338 12:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84915' 00:23:13.338 12:07:18 -- common/autotest_common.sh@955 -- # kill 84915 00:23:13.338 12:07:18 -- common/autotest_common.sh@960 -- # wait 84915 00:23:13.597 12:07:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:13.597 12:07:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:13.597 12:07:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:13.597 12:07:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.597 12:07:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:13.597 12:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.597 12:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.597 12:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.597 12:07:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:13.597 ************************************ 00:23:13.597 END TEST 
nvmf_multipath 00:23:13.597 ************************************ 00:23:13.597 00:23:13.597 real 1m2.310s 00:23:13.597 user 2m52.965s 00:23:13.597 sys 0m18.167s 00:23:13.597 12:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.597 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:13.857 12:07:19 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:13.857 12:07:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.857 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:13.857 ************************************ 00:23:13.857 START TEST nvmf_timeout 00:23:13.857 ************************************ 00:23:13.857 12:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:13.857 * Looking for test storage... 00:23:13.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.857 12:07:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:13.857 12:07:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:13.857 12:07:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:13.857 12:07:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:13.857 12:07:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:13.857 12:07:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:13.857 12:07:19 -- scripts/common.sh@335 -- # IFS=.-: 00:23:13.857 12:07:19 -- scripts/common.sh@335 -- # read -ra ver1 00:23:13.857 12:07:19 -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.857 12:07:19 -- scripts/common.sh@336 -- # read -ra ver2 00:23:13.857 12:07:19 -- scripts/common.sh@337 -- # local 'op=<' 00:23:13.857 12:07:19 -- scripts/common.sh@339 -- # ver1_l=2 00:23:13.857 12:07:19 -- scripts/common.sh@340 -- # ver2_l=1 00:23:13.857 12:07:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:13.857 12:07:19 -- scripts/common.sh@343 -- # case "$op" in 00:23:13.857 12:07:19 -- scripts/common.sh@344 -- # : 1 00:23:13.857 12:07:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:13.857 12:07:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.857 12:07:19 -- scripts/common.sh@364 -- # decimal 1 00:23:13.857 12:07:19 -- scripts/common.sh@352 -- # local d=1 00:23:13.857 12:07:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.857 12:07:19 -- scripts/common.sh@354 -- # echo 1 00:23:13.857 12:07:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:13.857 12:07:19 -- scripts/common.sh@365 -- # decimal 2 00:23:13.857 12:07:19 -- scripts/common.sh@352 -- # local d=2 00:23:13.857 12:07:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.857 12:07:19 -- scripts/common.sh@354 -- # echo 2 00:23:13.857 12:07:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:13.857 12:07:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:13.857 12:07:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:13.857 12:07:19 -- scripts/common.sh@367 -- # return 0 00:23:13.857 12:07:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:13.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.857 --rc genhtml_branch_coverage=1 00:23:13.857 --rc genhtml_function_coverage=1 00:23:13.857 --rc genhtml_legend=1 00:23:13.857 --rc geninfo_all_blocks=1 00:23:13.857 --rc geninfo_unexecuted_blocks=1 00:23:13.857 00:23:13.857 ' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:13.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.857 --rc genhtml_branch_coverage=1 00:23:13.857 --rc genhtml_function_coverage=1 00:23:13.857 --rc genhtml_legend=1 00:23:13.857 --rc geninfo_all_blocks=1 00:23:13.857 --rc geninfo_unexecuted_blocks=1 00:23:13.857 00:23:13.857 ' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:13.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.857 --rc genhtml_branch_coverage=1 00:23:13.857 --rc genhtml_function_coverage=1 00:23:13.857 --rc genhtml_legend=1 00:23:13.857 --rc geninfo_all_blocks=1 00:23:13.857 --rc geninfo_unexecuted_blocks=1 00:23:13.857 00:23:13.857 ' 00:23:13.857 12:07:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:13.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.857 --rc genhtml_branch_coverage=1 00:23:13.857 --rc genhtml_function_coverage=1 00:23:13.857 --rc genhtml_legend=1 00:23:13.857 --rc geninfo_all_blocks=1 00:23:13.857 --rc geninfo_unexecuted_blocks=1 00:23:13.857 00:23:13.857 ' 00:23:13.857 12:07:19 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.857 12:07:19 -- nvmf/common.sh@7 -- # uname -s 00:23:13.857 12:07:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.857 12:07:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.857 12:07:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.857 12:07:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.857 12:07:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.857 12:07:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.857 12:07:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.857 12:07:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.857 12:07:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.857 12:07:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.857 12:07:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:23:13.857 
12:07:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:23:13.857 12:07:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.857 12:07:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.857 12:07:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.857 12:07:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.857 12:07:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.857 12:07:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.857 12:07:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.857 12:07:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.857 12:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.858 12:07:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.858 12:07:19 -- paths/export.sh@5 -- # export PATH 00:23:13.858 12:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.858 12:07:19 -- nvmf/common.sh@46 -- # : 0 00:23:13.858 12:07:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:13.858 12:07:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:13.858 12:07:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:13.858 12:07:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.858 12:07:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.858 12:07:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
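The variables traced above (NVME_HOSTNQN generated by nvme gen-hostnqn, NVME_HOSTID, the NVME_HOST array, and NVME_CONNECT) are the host-identity plumbing that test/nvmf/common.sh sets up for every host test. The timeout test below drives I/O through SPDK's own bdevperf initiator rather than the kernel host, but as a rough sketch of how these variables are meant to be consumed (target address, port, and NQN taken from later in this log):

  # Assumes the target is listening on $NVMF_FIRST_TARGET_IP:$NVMF_PORT (10.0.0.2:4420)
  # and exports nqn.2016-06.io.spdk:cnode1 backed by Malloc0.
  $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
      -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
  nvme list                                     # the new namespace appears as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1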
00:23:13.858 12:07:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:13.858 12:07:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:13.858 12:07:19 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:13.858 12:07:19 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:13.858 12:07:19 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:13.858 12:07:19 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:13.858 12:07:19 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.858 12:07:19 -- host/timeout.sh@19 -- # nvmftestinit 00:23:13.858 12:07:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:13.858 12:07:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.858 12:07:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:13.858 12:07:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:13.858 12:07:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:13.858 12:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.858 12:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.858 12:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.858 12:07:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:13.858 12:07:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:13.858 12:07:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:13.858 12:07:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:13.858 12:07:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:13.858 12:07:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:13.858 12:07:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.858 12:07:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.858 12:07:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.858 12:07:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:13.858 12:07:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.858 12:07:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.858 12:07:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.858 12:07:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.858 12:07:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.858 12:07:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.858 12:07:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.858 12:07:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.858 12:07:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:14.117 12:07:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:14.117 Cannot find device "nvmf_tgt_br" 00:23:14.117 12:07:19 -- nvmf/common.sh@154 -- # true 00:23:14.117 12:07:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.117 Cannot find device "nvmf_tgt_br2" 00:23:14.117 12:07:19 -- nvmf/common.sh@155 -- # true 00:23:14.117 12:07:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:14.117 12:07:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:14.117 Cannot find device "nvmf_tgt_br" 00:23:14.117 12:07:19 -- nvmf/common.sh@157 -- # true 00:23:14.117 12:07:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:14.117 Cannot find device "nvmf_tgt_br2" 00:23:14.117 12:07:19 -- nvmf/common.sh@158 -- # true 00:23:14.117 12:07:19 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:14.117 12:07:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:14.117 12:07:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.117 12:07:19 -- nvmf/common.sh@161 -- # true 00:23:14.117 12:07:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.117 12:07:19 -- nvmf/common.sh@162 -- # true 00:23:14.117 12:07:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:14.117 12:07:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:14.117 12:07:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:14.117 12:07:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:14.117 12:07:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:14.117 12:07:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:14.117 12:07:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:14.117 12:07:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:14.117 12:07:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:14.117 12:07:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:14.117 12:07:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:14.117 12:07:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:14.117 12:07:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:14.118 12:07:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:14.118 12:07:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:14.118 12:07:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:14.377 12:07:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:14.377 12:07:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:14.377 12:07:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:14.377 12:07:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:14.377 12:07:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:14.377 12:07:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:14.377 12:07:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:14.377 12:07:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:14.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:23:14.377 00:23:14.377 --- 10.0.0.2 ping statistics --- 00:23:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.377 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:14.377 12:07:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:14.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:14.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:23:14.377 00:23:14.377 --- 10.0.0.3 ping statistics --- 00:23:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.377 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:14.377 12:07:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:14.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:14.377 00:23:14.377 --- 10.0.0.1 ping statistics --- 00:23:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.377 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:14.377 12:07:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.377 12:07:19 -- nvmf/common.sh@421 -- # return 0 00:23:14.377 12:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:14.377 12:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.377 12:07:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:14.377 12:07:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:14.377 12:07:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.377 12:07:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:14.377 12:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:14.377 12:07:19 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:14.377 12:07:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:14.377 12:07:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.377 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:14.377 12:07:19 -- nvmf/common.sh@469 -- # nvmfpid=86100 00:23:14.377 12:07:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:14.377 12:07:19 -- nvmf/common.sh@470 -- # waitforlisten 86100 00:23:14.377 12:07:19 -- common/autotest_common.sh@829 -- # '[' -z 86100 ']' 00:23:14.377 12:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.377 12:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.377 12:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.377 12:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.377 12:07:19 -- common/autotest_common.sh@10 -- # set +x 00:23:14.377 [2024-11-29 12:07:19.787003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:14.377 [2024-11-29 12:07:19.787118] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.636 [2024-11-29 12:07:19.925652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.636 [2024-11-29 12:07:20.010721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.636 [2024-11-29 12:07:20.010902] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.636 [2024-11-29 12:07:20.010915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:14.636 [2024-11-29 12:07:20.010923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.636 [2024-11-29 12:07:20.013669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.636 [2024-11-29 12:07:20.013687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.572 12:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.572 12:07:20 -- common/autotest_common.sh@862 -- # return 0 00:23:15.572 12:07:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:15.572 12:07:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.572 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:23:15.572 12:07:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.572 12:07:20 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.572 12:07:20 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:15.830 [2024-11-29 12:07:21.142723] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.830 12:07:21 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:16.089 Malloc0 00:23:16.089 12:07:21 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.347 12:07:21 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.606 12:07:21 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.865 [2024-11-29 12:07:22.208491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.865 12:07:22 -- host/timeout.sh@32 -- # bdevperf_pid=86149 00:23:16.865 12:07:22 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:16.865 12:07:22 -- host/timeout.sh@34 -- # waitforlisten 86149 /var/tmp/bdevperf.sock 00:23:16.865 12:07:22 -- common/autotest_common.sh@829 -- # '[' -z 86149 ']' 00:23:16.865 12:07:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.865 12:07:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.865 12:07:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.865 12:07:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.865 12:07:22 -- common/autotest_common.sh@10 -- # set +x 00:23:16.865 [2024-11-29 12:07:22.277876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
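Target-side bring-up for the timeout test follows the standard SPDK pattern traced above: create the TCP transport, create a 64 MiB / 512-byte-block malloc bdev, wrap it in a subsystem, and publish a listener for bdevperf to connect to. Condensed into a standalone sketch (paths, names, and flags copied from this log; error handling omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags as used by timeout.sh
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420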
00:23:16.866 [2024-11-29 12:07:22.277985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86149 ] 00:23:17.124 [2024-11-29 12:07:22.414679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.124 [2024-11-29 12:07:22.510742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.061 12:07:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.061 12:07:23 -- common/autotest_common.sh@862 -- # return 0 00:23:18.061 12:07:23 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:18.061 12:07:23 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:18.724 NVMe0n1 00:23:18.724 12:07:23 -- host/timeout.sh@51 -- # rpc_pid=86178 00:23:18.724 12:07:23 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.724 12:07:23 -- host/timeout.sh@53 -- # sleep 1 00:23:18.724 Running I/O for 10 seconds... 00:23:19.664 12:07:24 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.664 [2024-11-29 12:07:25.153673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 
[2024-11-29 12:07:25.153846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153897] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153970] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.153995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ca60 is same with the state(5) to be set 00:23:19.664 [2024-11-29 12:07:25.154065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-11-29 12:07:25.154497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-11-29 12:07:25.154516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114568 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.665 [2024-11-29 12:07:25.154875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.154957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.154987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.154996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.155016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.155036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.155064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 
12:07:25.155083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-11-29 12:07:25.155124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.155145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.665 [2024-11-29 12:07:25.155165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-11-29 12:07:25.155176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.155970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.155982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-11-29 12:07:25.155991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.156002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.666 [2024-11-29 12:07:25.156011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-11-29 12:07:25.156021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 
[2024-11-29 12:07:25.156163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156371] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156790] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.667 [2024-11-29 12:07:25.156798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-11-29 12:07:25.156829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-11-29 12:07:25.156838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.668 [2024-11-29 12:07:25.156849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.668 [2024-11-29 12:07:25.156858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.668 [2024-11-29 12:07:25.156869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.668 [2024-11-29 12:07:25.156878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.668 [2024-11-29 12:07:25.156889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5fc9a0 is same with the state(5) to be set 00:23:19.668 [2024-11-29 12:07:25.156902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.668 [2024-11-29 12:07:25.156915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.668 [2024-11-29 12:07:25.156924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115208 len:8 PRP1 0x0 PRP2 0x0 00:23:19.668 [2024-11-29 12:07:25.156933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.668 [2024-11-29 12:07:25.156992] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5fc9a0 was disconnected and freed. reset controller. 
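The abort storm above is the expected result of the fault this test injects: host/timeout.sh drops the target listener while bdevperf is driving 128-deep verify I/O, so every queued command is completed with ABORTED - SQ DELETION and the qpair is freed. A minimal sketch of that injection sequence, assuming the same NQN, address and RPC sockets that appear in this log ($SPDK stands in for the repo path, and the ordering shown is illustrative rather than lifted from the script itself):

    # attach the initiator with a bounded retry budget (same flags as at 12:07:23)
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # sever the connection under load by removing the listener (same RPC as at 12:07:24)
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # re-adding the listener before the loss timeout would let NVMe0 reconnect;
    # leaving it down, as in the run above, lets the controller expire into the failed state
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420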
00:23:19.668 [2024-11-29 12:07:25.157250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.668 [2024-11-29 12:07:25.157342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x601610 (9): Bad file descriptor 00:23:19.668 [2024-11-29 12:07:25.157460] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.668 [2024-11-29 12:07:25.157541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.668 [2024-11-29 12:07:25.157595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.668 [2024-11-29 12:07:25.157611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x601610 with addr=10.0.0.2, port=4420 00:23:19.668 [2024-11-29 12:07:25.157622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x601610 is same with the state(5) to be set 00:23:19.668 [2024-11-29 12:07:25.157642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x601610 (9): Bad file descriptor 00:23:19.668 [2024-11-29 12:07:25.157659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.668 [2024-11-29 12:07:25.157669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.668 [2024-11-29 12:07:25.157679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.668 [2024-11-29 12:07:25.157704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.668 [2024-11-29 12:07:25.157725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.927 12:07:25 -- host/timeout.sh@56 -- # sleep 2 00:23:21.830 [2024-11-29 12:07:27.157863] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.830 [2024-11-29 12:07:27.158014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.830 [2024-11-29 12:07:27.158056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.830 [2024-11-29 12:07:27.158072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x601610 with addr=10.0.0.2, port=4420 00:23:21.830 [2024-11-29 12:07:27.158085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x601610 is same with the state(5) to be set 00:23:21.830 [2024-11-29 12:07:27.158117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x601610 (9): Bad file descriptor 00:23:21.830 [2024-11-29 12:07:27.158149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.830 [2024-11-29 12:07:27.158161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.830 [2024-11-29 12:07:27.158172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.830 [2024-11-29 12:07:27.158201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
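With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the cadence of the failures above and in the entries that follow is what those two numbers predict (timestamps taken from this trace, budget arithmetic added for clarity):

    12:07:25  qpair torn down; immediate reconnect fails, connect() errno = 111
    12:07:27  retry after 2 s, errno = 111
    12:07:29  retry after 4 s, errno = 111 (shown below)
    12:07:31  next poll is past the 5 s loss budget, so the controller is left
              in the failed state instead of being retried again (also below)

Each connect() fails with ECONNREFUSED (111) simply because the listener on 10.0.0.2:4420 no longer exists.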
00:23:21.830 [2024-11-29 12:07:27.158214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.830 12:07:27 -- host/timeout.sh@57 -- # get_controller 00:23:21.830 12:07:27 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.830 12:07:27 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:22.089 12:07:27 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:22.089 12:07:27 -- host/timeout.sh@58 -- # get_bdev 00:23:22.089 12:07:27 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:22.089 12:07:27 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:22.347 12:07:27 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:22.347 12:07:27 -- host/timeout.sh@61 -- # sleep 5 00:23:23.724 [2024-11-29 12:07:29.158427] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.724 [2024-11-29 12:07:29.158595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.724 [2024-11-29 12:07:29.158636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.724 [2024-11-29 12:07:29.158652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x601610 with addr=10.0.0.2, port=4420 00:23:23.724 [2024-11-29 12:07:29.158668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x601610 is same with the state(5) to be set 00:23:23.724 [2024-11-29 12:07:29.158701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x601610 (9): Bad file descriptor 00:23:23.724 [2024-11-29 12:07:29.158721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:23.724 [2024-11-29 12:07:29.158731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:23.724 [2024-11-29 12:07:29.158742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:23.724 [2024-11-29 12:07:29.158774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.724 [2024-11-29 12:07:29.158787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.255 [2024-11-29 12:07:31.158848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.255 [2024-11-29 12:07:31.158939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.255 [2024-11-29 12:07:31.158952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.255 [2024-11-29 12:07:31.158963] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:26.255 [2024-11-29 12:07:31.158995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
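Once the loss timeout has expired, the script verifies that the initiator really gave the controller up: the same get_controller/get_bdev helpers that still returned NVMe0 and NVMe0n1 at 12:07:27 are expected to return nothing now (the [[ '' == '' ]] checks below). A minimal way to reproduce that check by hand, assuming the same bdevperf RPC socket and with $SPDK again standing in for the repo path:

    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    # both pipelines print nothing once the ctrlr-loss timeout has deleted the
    # controller and the NVMe0n1 namespace bdev that hung off it

The bdevperf summary that follows is also self-consistent: 1760.12 completions per second at 4096 bytes each is 1760.12 * 4096 / 1048576 ≈ 6.88 MiB/s, the figure reported in the MiB/s column.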
00:23:26.823 00:23:26.823 Latency(us) 00:23:26.823 [2024-11-29T12:07:32.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.823 [2024-11-29T12:07:32.334Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.823 Verification LBA range: start 0x0 length 0x4000 00:23:26.823 NVMe0n1 : 8.15 1760.12 6.88 15.71 0.00 71967.83 3485.32 7015926.69 00:23:26.823 [2024-11-29T12:07:32.334Z] =================================================================================================================== 00:23:26.823 [2024-11-29T12:07:32.334Z] Total : 1760.12 6.88 15.71 0.00 71967.83 3485.32 7015926.69 00:23:26.823 0 00:23:27.388 12:07:32 -- host/timeout.sh@62 -- # get_controller 00:23:27.388 12:07:32 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.388 12:07:32 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:27.647 12:07:33 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:27.647 12:07:33 -- host/timeout.sh@63 -- # get_bdev 00:23:27.648 12:07:33 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:27.648 12:07:33 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:27.907 12:07:33 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:27.907 12:07:33 -- host/timeout.sh@65 -- # wait 86178 00:23:27.907 12:07:33 -- host/timeout.sh@67 -- # killprocess 86149 00:23:27.907 12:07:33 -- common/autotest_common.sh@936 -- # '[' -z 86149 ']' 00:23:27.907 12:07:33 -- common/autotest_common.sh@940 -- # kill -0 86149 00:23:27.907 12:07:33 -- common/autotest_common.sh@941 -- # uname 00:23:27.907 12:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.907 12:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86149 00:23:27.907 12:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:27.907 12:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:27.907 killing process with pid 86149 00:23:27.907 12:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86149' 00:23:27.907 12:07:33 -- common/autotest_common.sh@955 -- # kill 86149 00:23:27.907 Received shutdown signal, test time was about 9.327060 seconds 00:23:27.907 00:23:27.907 Latency(us) 00:23:27.907 [2024-11-29T12:07:33.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.907 [2024-11-29T12:07:33.418Z] =================================================================================================================== 00:23:27.907 [2024-11-29T12:07:33.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.907 12:07:33 -- common/autotest_common.sh@960 -- # wait 86149 00:23:28.166 12:07:33 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.425 [2024-11-29 12:07:33.898388] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.425 12:07:33 -- host/timeout.sh@74 -- # bdevperf_pid=86301 00:23:28.425 12:07:33 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:28.425 12:07:33 -- host/timeout.sh@76 -- # waitforlisten 86301 /var/tmp/bdevperf.sock 00:23:28.425 12:07:33 -- common/autotest_common.sh@829 -- # '[' -z 86301 ']' 00:23:28.425 12:07:33 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:28.425 12:07:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.425 12:07:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.425 12:07:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.425 12:07:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.685 [2024-11-29 12:07:33.962270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:28.685 [2024-11-29 12:07:33.962358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86301 ] 00:23:28.685 [2024-11-29 12:07:34.108648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.943 [2024-11-29 12:07:34.231794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.522 12:07:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.522 12:07:34 -- common/autotest_common.sh@862 -- # return 0 00:23:29.522 12:07:34 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:29.851 12:07:35 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:30.109 NVMe0n1 00:23:30.109 12:07:35 -- host/timeout.sh@84 -- # rpc_pid=86323 00:23:30.109 12:07:35 -- host/timeout.sh@86 -- # sleep 1 00:23:30.109 12:07:35 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.367 Running I/O for 10 seconds... 
00:23:31.306 12:07:36 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.306 [2024-11-29 12:07:36.789473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.306 [2024-11-29 12:07:36.789512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.306 [2024-11-29 12:07:36.789592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.306 [2024-11-29 12:07:36.789605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.306 [2024-11-29 12:07:36.789630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.306 [2024-11-29 12:07:36.789630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.306 [2024-11-29 12:07:36.789642] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.306 [2024-11-29 12:07:36.789663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.306 [2024-11-29 12:07:36.789674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306 [2024-11-29 12:07:36.789711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c1b0 is same with the state(5) to be set 00:23:31.306
[2024-11-29 12:07:36.789784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.306 [2024-11-29 12:07:36.789801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.789970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.789982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790065] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790560] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.307 [2024-11-29 12:07:36.790800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.307 [2024-11-29 12:07:36.790829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.307 [2024-11-29 12:07:36.790838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.790982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.790990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 
[2024-11-29 12:07:36.791278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.308 [2024-11-29 12:07:36.791696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.308 [2024-11-29 12:07:36.791716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.308 [2024-11-29 12:07:36.791726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.791758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.791779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.791841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.791952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.791987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.791997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:31.309 [2024-11-29 12:07:36.792478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.309 [2024-11-29 12:07:36.792629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.309 [2024-11-29 12:07:36.792647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.309 [2024-11-29 12:07:36.792657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 
12:07:36.792675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.310 [2024-11-29 12:07:36.792803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:31.310 [2024-11-29 12:07:36.792850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:31.310 [2024-11-29 12:07:36.792858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115880 len:8 PRP1 0x0 PRP2 0x0 00:23:31.310 [2024-11-29 12:07:36.792867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.310 [2024-11-29 12:07:36.792945] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x974870 was disconnected and freed. reset controller. 
00:23:31.310 [2024-11-29 12:07:36.793216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:31.310 [2024-11-29 12:07:36.793265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:31.310 [2024-11-29 12:07:36.793413] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.310 [2024-11-29 12:07:36.793501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.310 [2024-11-29 12:07:36.793562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.310 [2024-11-29 12:07:36.793580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:31.310 [2024-11-29 12:07:36.793593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:31.310 [2024-11-29 12:07:36.793629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:31.310 [2024-11-29 12:07:36.793661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:31.310 [2024-11-29 12:07:36.793670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:31.310 [2024-11-29 12:07:36.793681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:31.310 [2024-11-29 12:07:36.793716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.310 [2024-11-29 12:07:36.793727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:31.310 12:07:36 -- host/timeout.sh@90 -- # sleep 1 00:23:32.687 [2024-11-29 12:07:37.793904] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.687 [2024-11-29 12:07:37.794036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.687 [2024-11-29 12:07:37.794080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.687 [2024-11-29 12:07:37.794096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:32.687 [2024-11-29 12:07:37.794111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:32.687 [2024-11-29 12:07:37.794139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:32.687 [2024-11-29 12:07:37.794157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:32.687 [2024-11-29 12:07:37.794166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:32.687 [2024-11-29 12:07:37.794177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:32.687 [2024-11-29 12:07:37.794207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:32.687 [2024-11-29 12:07:37.794218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:32.687 12:07:37 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.687 [2024-11-29 12:07:38.064289] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.687 12:07:38 -- host/timeout.sh@92 -- # wait 86323 00:23:33.624 [2024-11-29 12:07:38.810258] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:40.223 00:23:40.223 Latency(us) 00:23:40.223 [2024-11-29T12:07:45.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.223 [2024-11-29T12:07:45.734Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.223 Verification LBA range: start 0x0 length 0x4000 00:23:40.223 NVMe0n1 : 10.01 8848.04 34.56 0.00 0.00 14443.29 1266.04 3019898.88 00:23:40.223 [2024-11-29T12:07:45.734Z] =================================================================================================================== 00:23:40.223 [2024-11-29T12:07:45.734Z] Total : 8848.04 34.56 0.00 0.00 14443.29 1266.04 3019898.88 00:23:40.223 0 00:23:40.223 12:07:45 -- host/timeout.sh@97 -- # rpc_pid=86429 00:23:40.223 12:07:45 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.223 12:07:45 -- host/timeout.sh@98 -- # sleep 1 00:23:40.497 Running I/O for 10 seconds... 00:23:41.435 12:07:46 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.435 [2024-11-29 12:07:46.937451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.435 [2024-11-29 12:07:46.937526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69d80 is same with the state(5) to be set 00:23:41.436 [2024-11-29 12:07:46.937833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.937891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.937915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.937926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.937937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.937945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.937955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.937965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.937975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.937984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.937994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 
[2024-11-29 12:07:46.938128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.436 [2024-11-29 12:07:46.938233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.436 [2024-11-29 12:07:46.938243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938717] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.437 [2024-11-29 12:07:46.938725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.437 [2024-11-29 12:07:46.938853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.437 [2024-11-29 12:07:46.938863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.938873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.938893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.938912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.938932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.938970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.938999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117688 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:41.438 [2024-11-29 12:07:46.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.438 [2024-11-29 12:07:46.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.438 [2024-11-29 12:07:46.939463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.438 [2024-11-29 12:07:46.939473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 
12:07:46.939522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.939871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.939987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.939996] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.940014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.940033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.940068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.940086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.940106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.940125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.940145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.439 [2024-11-29 12:07:46.940174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.439 [2024-11-29 12:07:46.940194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.439 [2024-11-29 12:07:46.940205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.440 [2024-11-29 12:07:46.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.440 [2024-11-29 12:07:46.940530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2c770 is same with the state(5) to be set 00:23:41.440 [2024-11-29 12:07:46.940564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:41.440 [2024-11-29 12:07:46.940573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:41.440 [2024-11-29 12:07:46.940581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117392 len:8 PRP1 0x0 PRP2 0x0 00:23:41.440 [2024-11-29 12:07:46.940592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.440 [2024-11-29 12:07:46.940707] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa2c770 was disconnected and freed. reset controller. 
00:23:41.440 [2024-11-29 12:07:46.940949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.440 [2024-11-29 12:07:46.941041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:41.440 [2024-11-29 12:07:46.941171] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.440 [2024-11-29 12:07:46.941223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.440 [2024-11-29 12:07:46.941264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.440 [2024-11-29 12:07:46.941279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:41.440 [2024-11-29 12:07:46.941289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:41.440 [2024-11-29 12:07:46.941334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:41.440 [2024-11-29 12:07:46.941350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:41.440 [2024-11-29 12:07:46.941360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:41.440 [2024-11-29 12:07:46.941387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:41.440 [2024-11-29 12:07:46.941407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.440 [2024-11-29 12:07:46.941418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.699 12:07:46 -- host/timeout.sh@101 -- # sleep 3 00:23:42.635 [2024-11-29 12:07:47.941616] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.635 [2024-11-29 12:07:47.941738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.635 [2024-11-29 12:07:47.941780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.635 [2024-11-29 12:07:47.941796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:42.635 [2024-11-29 12:07:47.941812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:42.635 [2024-11-29 12:07:47.941840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:42.635 [2024-11-29 12:07:47.941859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.635 [2024-11-29 12:07:47.941869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:42.635 [2024-11-29 12:07:47.941879] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:42.635 [2024-11-29 12:07:47.941908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.635 [2024-11-29 12:07:47.941919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.573 [2024-11-29 12:07:48.942084] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.573 [2024-11-29 12:07:48.942195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.573 [2024-11-29 12:07:48.942237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.573 [2024-11-29 12:07:48.942253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:43.573 [2024-11-29 12:07:48.942269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:43.573 [2024-11-29 12:07:48.942309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:43.573 [2024-11-29 12:07:48.942344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:43.573 [2024-11-29 12:07:48.942353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:43.573 [2024-11-29 12:07:48.942363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.573 [2024-11-29 12:07:48.942401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.573 [2024-11-29 12:07:48.942412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.509 [2024-11-29 12:07:49.944256] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.509 [2024-11-29 12:07:49.944372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.509 [2024-11-29 12:07:49.944414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.509 [2024-11-29 12:07:49.944431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979450 with addr=10.0.0.2, port=4420 00:23:44.509 [2024-11-29 12:07:49.944445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979450 is same with the state(5) to be set 00:23:44.509 [2024-11-29 12:07:49.944611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979450 (9): Bad file descriptor 00:23:44.509 [2024-11-29 12:07:49.944753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.509 [2024-11-29 12:07:49.944764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.509 [2024-11-29 12:07:49.944774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.509 [2024-11-29 12:07:49.946815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.509 [2024-11-29 12:07:49.946841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.509 12:07:49 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.768 [2024-11-29 12:07:50.234094] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.768 12:07:50 -- host/timeout.sh@103 -- # wait 86429 00:23:45.704 [2024-11-29 12:07:50.966336] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:50.972 00:23:50.972 Latency(us) 00:23:50.972 [2024-11-29T12:07:56.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.972 [2024-11-29T12:07:56.483Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.972 Verification LBA range: start 0x0 length 0x4000 00:23:50.972 NVMe0n1 : 10.01 7342.29 28.68 6140.71 0.00 9479.33 610.68 3019898.88 00:23:50.972 [2024-11-29T12:07:56.483Z] =================================================================================================================== 00:23:50.972 [2024-11-29T12:07:56.483Z] Total : 7342.29 28.68 6140.71 0.00 9479.33 0.00 3019898.88 00:23:50.972 0 00:23:50.972 12:07:55 -- host/timeout.sh@105 -- # killprocess 86301 00:23:50.972 12:07:55 -- common/autotest_common.sh@936 -- # '[' -z 86301 ']' 00:23:50.972 12:07:55 -- common/autotest_common.sh@940 -- # kill -0 86301 00:23:50.972 12:07:55 -- common/autotest_common.sh@941 -- # uname 00:23:50.972 12:07:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:50.972 12:07:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86301 00:23:50.972 killing process with pid 86301 00:23:50.972 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.972 00:23:50.972 Latency(us) 00:23:50.972 [2024-11-29T12:07:56.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.972 [2024-11-29T12:07:56.483Z] =================================================================================================================== 00:23:50.972 [2024-11-29T12:07:56.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.972 12:07:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:50.972 12:07:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:50.972 12:07:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86301' 00:23:50.972 12:07:55 -- common/autotest_common.sh@955 -- # kill 86301 00:23:50.972 12:07:55 -- common/autotest_common.sh@960 -- # wait 86301 00:23:50.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
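Once the listener is back, the reset completes ("Resetting controller successful") and the first 10-second verify run finishes with about 7342 IOPS and a large Fail/s count that reflects the injected outage; the bdevperf instance (pid 86301) is then torn down with the killprocess helper traced above. A condensed approximation of what those traced autotest_common.sh steps do; this is a sketch reconstructed from the trace, not the helper's actual source, and the sudo handling is only hinted at:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # what is it actually running?
        # the real helper special-cases processes launched through sudo here
        echo "killing process with pid $pid"
        kill "$pid"                                  # SIGTERM
        wait "$pid"                                  # reap it so the next test starts clean
    }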
00:23:50.972 12:07:56 -- host/timeout.sh@110 -- # bdevperf_pid=86543 00:23:50.972 12:07:56 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:50.973 12:07:56 -- host/timeout.sh@112 -- # waitforlisten 86543 /var/tmp/bdevperf.sock 00:23:50.973 12:07:56 -- common/autotest_common.sh@829 -- # '[' -z 86543 ']' 00:23:50.973 12:07:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.973 12:07:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.973 12:07:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.973 12:07:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.973 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:50.973 [2024-11-29 12:07:56.145227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:50.973 [2024-11-29 12:07:56.145336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86543 ] 00:23:50.973 [2024-11-29 12:07:56.288574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.973 [2024-11-29 12:07:56.370938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.907 12:07:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.907 12:07:57 -- common/autotest_common.sh@862 -- # return 0 00:23:51.907 12:07:57 -- host/timeout.sh@116 -- # dtrace_pid=86559 00:23:51.907 12:07:57 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86543 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:51.907 12:07:57 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:51.907 12:07:57 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:52.475 NVMe0n1 00:23:52.475 12:07:57 -- host/timeout.sh@124 -- # rpc_pid=86601 00:23:52.475 12:07:57 -- host/timeout.sh@125 -- # sleep 1 00:23:52.475 12:07:57 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.475 Running I/O for 10 seconds... 
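The second run is set up differently: bdevperf is started with -z and its own RPC socket, so it sits idle until it is configured and told to run over /var/tmp/bdevperf.sock; bpftrace probes from scripts/bpf/nvmf_timeout.bt are attached to the bdevperf pid, NVMe bdev options are set (-r -1 -e 9, flags as logged), and the controller is attached with an explicit 5-second ctrlr-loss timeout and 2-second reconnect delay. The same sequence written out as a standalone sketch; paths are relative to the SPDK repo root, the flags and addresses are copied from this log, and the waitforlisten/PID bookkeeping of the real script is simplified away:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # attach the timeout-tracing bpftrace probes to the bdevperf process
    scripts/bpftrace.sh "$bdevperf_pid" scripts/bpf/nvmf_timeout.bt &

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # start the 10-second random-read workload against the attached bdev
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests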
00:23:53.412 12:07:58 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.673 [2024-11-29 12:07:58.972398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 
12:07:58.972705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.972982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.972994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:53.673 [2024-11-29 12:07:58.973572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.673 [2024-11-29 12:07:58.973765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.673 [2024-11-29 12:07:58.973774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.973985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.973996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89968 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:53.674 [2024-11-29 12:07:58.974659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.974985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.974996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.674 [2024-11-29 12:07:58.975221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6a9f0 is same with the state(5) to be set 00:23:53.674 [2024-11-29 12:07:58.975246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:53.674 [2024-11-29 12:07:58.975253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:53.674 [2024-11-29 12:07:58.975262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113896 len:8 PRP1 0x0 PRP2 0x0 00:23:53.674 [2024-11-29 12:07:58.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.674 [2024-11-29 12:07:58.975331] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c6a9f0 was disconnected and freed. reset controller. 
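The wall of ABORTED - SQ DELETION completions above is the queued workload (queue depth 128, with the cids counting down from 126 to 0) being manually completed when the qpair is torn down after the listener is removed at host/timeout.sh@126; it ends with the qpair being freed and the reset path taking over, and it is expected noise for this test. When reading such logs by hand it can help to collapse that noise first, for example (the log file name here is illustrative, not one produced by this job):

    grep -c 'ABORTED - SQ DELETION' bdevperf.log          # how many in-flight I/Os were aborted
    grep -v 'ABORTED - SQ DELETION' bdevperf.log | less   # read the rest of the log without the flood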
00:23:53.674 [2024-11-29 12:07:58.975635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:53.674 [2024-11-29 12:07:58.975731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6f470 (9): Bad file descriptor 00:23:53.674 [2024-11-29 12:07:58.975842] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.674 [2024-11-29 12:07:58.975917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.674 [2024-11-29 12:07:58.975959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.674 [2024-11-29 12:07:58.975975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6f470 with addr=10.0.0.2, port=4420 00:23:53.674 [2024-11-29 12:07:58.975986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6f470 is same with the state(5) to be set 00:23:53.674 [2024-11-29 12:07:58.976005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6f470 (9): Bad file descriptor 00:23:53.674 [2024-11-29 12:07:58.976023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.674 [2024-11-29 12:07:58.976033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:53.674 [2024-11-29 12:07:58.976043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:53.674 [2024-11-29 12:07:58.976065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:53.674 [2024-11-29 12:07:58.976078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:53.674 12:07:58 -- host/timeout.sh@128 -- # wait 86601 00:23:55.573 [2024-11-29 12:08:00.976351] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.573 [2024-11-29 12:08:00.976471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.573 [2024-11-29 12:08:00.976530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.573 [2024-11-29 12:08:00.976549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6f470 with addr=10.0.0.2, port=4420 00:23:55.573 [2024-11-29 12:08:00.976565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6f470 is same with the state(5) to be set 00:23:55.573 [2024-11-29 12:08:00.976597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6f470 (9): Bad file descriptor 00:23:55.573 [2024-11-29 12:08:00.976619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:55.573 [2024-11-29 12:08:00.976630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:55.573 [2024-11-29 12:08:00.976642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:55.573 [2024-11-29 12:08:00.976676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
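From this point the reconnect attempts arrive two seconds apart (12:08:00 and 12:08:02 below), matching --reconnect-delay-sec 2, and once the 5-second ctrlr-loss timeout expires the controller is left permanently in the failed state (the 12:08:04 "already in failed state" entries). What the test actually asserts on is the bpftrace output dumped from trace.txt further down: it must record repeated "reconnect delay bdev controller NVMe0" probe hits, which is what the grep at host/timeout.sh@132 counts. A standalone version of that check might look like the following; the exact pass/fail wiring of timeout.sh is not visible in this excerpt, so the threshold handling here is an assumption:

    delays=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
    echo "reconnect delays observed: $delays"   # this run logs 3 of them, one per 2-second delay
    (( delays > 2 )) || echo "expected more than two delayed reconnects before ctrlr-loss timeout"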
00:23:55.573 [2024-11-29 12:08:00.976690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.474 [2024-11-29 12:08:02.976935] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.474 [2024-11-29 12:08:02.977056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.474 [2024-11-29 12:08:02.977098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.474 [2024-11-29 12:08:02.977114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6f470 with addr=10.0.0.2, port=4420 00:23:57.474 [2024-11-29 12:08:02.977128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6f470 is same with the state(5) to be set 00:23:57.474 [2024-11-29 12:08:02.977159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6f470 (9): Bad file descriptor 00:23:57.474 [2024-11-29 12:08:02.977180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.474 [2024-11-29 12:08:02.977190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.474 [2024-11-29 12:08:02.977202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.474 [2024-11-29 12:08:02.977234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.474 [2024-11-29 12:08:02.977248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.009 [2024-11-29 12:08:04.977355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.009 [2024-11-29 12:08:04.977434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.009 [2024-11-29 12:08:04.977446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.009 [2024-11-29 12:08:04.977458] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:00.009 [2024-11-29 12:08:04.977490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.576 00:24:00.577 Latency(us) 00:24:00.577 [2024-11-29T12:08:06.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.577 [2024-11-29T12:08:06.088Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:00.577 NVMe0n1 : 8.16 2101.26 8.21 15.69 0.00 60412.78 8221.79 7015926.69 00:24:00.577 [2024-11-29T12:08:06.088Z] =================================================================================================================== 00:24:00.577 [2024-11-29T12:08:06.088Z] Total : 2101.26 8.21 15.69 0.00 60412.78 8221.79 7015926.69 00:24:00.577 0 00:24:00.577 12:08:05 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.577 Attaching 5 probes... 
00:24:00.577 1337.702686: reset bdev controller NVMe0 00:24:00.577 1337.857994: reconnect bdev controller NVMe0 00:24:00.577 3338.241396: reconnect delay bdev controller NVMe0 00:24:00.577 3338.270467: reconnect bdev controller NVMe0 00:24:00.577 5338.825945: reconnect delay bdev controller NVMe0 00:24:00.577 5338.856329: reconnect bdev controller NVMe0 00:24:00.577 7339.383091: reconnect delay bdev controller NVMe0 00:24:00.577 7339.428607: reconnect bdev controller NVMe0 00:24:00.577 12:08:06 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:00.577 12:08:06 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:00.577 12:08:06 -- host/timeout.sh@136 -- # kill 86559 00:24:00.577 12:08:06 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.577 12:08:06 -- host/timeout.sh@139 -- # killprocess 86543 00:24:00.577 12:08:06 -- common/autotest_common.sh@936 -- # '[' -z 86543 ']' 00:24:00.577 12:08:06 -- common/autotest_common.sh@940 -- # kill -0 86543 00:24:00.577 12:08:06 -- common/autotest_common.sh@941 -- # uname 00:24:00.577 12:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:00.577 12:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86543 00:24:00.577 killing process with pid 86543 00:24:00.577 Received shutdown signal, test time was about 8.228808 seconds 00:24:00.577 00:24:00.577 Latency(us) 00:24:00.577 [2024-11-29T12:08:06.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.577 [2024-11-29T12:08:06.088Z] =================================================================================================================== 00:24:00.577 [2024-11-29T12:08:06.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.577 12:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:00.577 12:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:00.577 12:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86543' 00:24:00.577 12:08:06 -- common/autotest_common.sh@955 -- # kill 86543 00:24:00.577 12:08:06 -- common/autotest_common.sh@960 -- # wait 86543 00:24:00.835 12:08:06 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.095 12:08:06 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:01.095 12:08:06 -- host/timeout.sh@145 -- # nvmftestfini 00:24:01.095 12:08:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:01.095 12:08:06 -- nvmf/common.sh@116 -- # sync 00:24:01.355 12:08:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:01.355 12:08:06 -- nvmf/common.sh@119 -- # set +e 00:24:01.355 12:08:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:01.355 12:08:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:01.355 rmmod nvme_tcp 00:24:01.355 rmmod nvme_fabrics 00:24:01.355 rmmod nvme_keyring 00:24:01.355 12:08:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:01.355 12:08:06 -- nvmf/common.sh@123 -- # set -e 00:24:01.355 12:08:06 -- nvmf/common.sh@124 -- # return 0 00:24:01.355 12:08:06 -- nvmf/common.sh@477 -- # '[' -n 86100 ']' 00:24:01.355 12:08:06 -- nvmf/common.sh@478 -- # killprocess 86100 00:24:01.355 12:08:06 -- common/autotest_common.sh@936 -- # '[' -z 86100 ']' 00:24:01.355 12:08:06 -- common/autotest_common.sh@940 -- # kill -0 86100 00:24:01.355 12:08:06 -- common/autotest_common.sh@941 -- # uname 00:24:01.355 12:08:06 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:24:01.355 12:08:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86100 00:24:01.355 killing process with pid 86100 00:24:01.355 12:08:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:01.355 12:08:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.355 12:08:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86100' 00:24:01.355 12:08:06 -- common/autotest_common.sh@955 -- # kill 86100 00:24:01.355 12:08:06 -- common/autotest_common.sh@960 -- # wait 86100 00:24:01.614 12:08:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:01.614 12:08:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:01.614 12:08:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:01.614 12:08:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.614 12:08:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:01.614 12:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.614 12:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.614 12:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.614 12:08:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:01.614 ************************************ 00:24:01.614 END TEST nvmf_timeout 00:24:01.614 ************************************ 00:24:01.614 00:24:01.614 real 0m47.942s 00:24:01.614 user 2m20.105s 00:24:01.614 sys 0m6.312s 00:24:01.614 12:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.614 12:08:07 -- common/autotest_common.sh@10 -- # set +x 00:24:01.874 12:08:07 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:01.874 12:08:07 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:01.874 12:08:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.874 12:08:07 -- common/autotest_common.sh@10 -- # set +x 00:24:01.874 12:08:07 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:01.874 ************************************ 00:24:01.874 END TEST nvmf_tcp 00:24:01.874 ************************************ 00:24:01.874 00:24:01.874 real 10m57.449s 00:24:01.874 user 30m42.112s 00:24:01.874 sys 3m19.670s 00:24:01.874 12:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:01.874 12:08:07 -- common/autotest_common.sh@10 -- # set +x 00:24:01.874 12:08:07 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:24:01.874 12:08:07 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:01.874 12:08:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:01.874 12:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:01.874 12:08:07 -- common/autotest_common.sh@10 -- # set +x 00:24:01.874 ************************************ 00:24:01.874 START TEST nvmf_dif 00:24:01.874 ************************************ 00:24:01.874 12:08:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:01.874 * Looking for test storage... 
00:24:01.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:01.874 12:08:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:01.874 12:08:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:01.874 12:08:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:01.874 12:08:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:01.874 12:08:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:01.874 12:08:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:01.874 12:08:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:01.874 12:08:07 -- scripts/common.sh@335 -- # IFS=.-: 00:24:01.874 12:08:07 -- scripts/common.sh@335 -- # read -ra ver1 00:24:01.874 12:08:07 -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.874 12:08:07 -- scripts/common.sh@336 -- # read -ra ver2 00:24:01.874 12:08:07 -- scripts/common.sh@337 -- # local 'op=<' 00:24:01.874 12:08:07 -- scripts/common.sh@339 -- # ver1_l=2 00:24:01.874 12:08:07 -- scripts/common.sh@340 -- # ver2_l=1 00:24:01.874 12:08:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:01.874 12:08:07 -- scripts/common.sh@343 -- # case "$op" in 00:24:01.874 12:08:07 -- scripts/common.sh@344 -- # : 1 00:24:01.874 12:08:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:01.874 12:08:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.135 12:08:07 -- scripts/common.sh@364 -- # decimal 1 00:24:02.135 12:08:07 -- scripts/common.sh@352 -- # local d=1 00:24:02.135 12:08:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.135 12:08:07 -- scripts/common.sh@354 -- # echo 1 00:24:02.135 12:08:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.135 12:08:07 -- scripts/common.sh@365 -- # decimal 2 00:24:02.135 12:08:07 -- scripts/common.sh@352 -- # local d=2 00:24:02.135 12:08:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.135 12:08:07 -- scripts/common.sh@354 -- # echo 2 00:24:02.135 12:08:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.135 12:08:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.135 12:08:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.135 12:08:07 -- scripts/common.sh@367 -- # return 0 00:24:02.135 12:08:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.135 12:08:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.135 --rc genhtml_branch_coverage=1 00:24:02.135 --rc genhtml_function_coverage=1 00:24:02.135 --rc genhtml_legend=1 00:24:02.135 --rc geninfo_all_blocks=1 00:24:02.135 --rc geninfo_unexecuted_blocks=1 00:24:02.135 00:24:02.135 ' 00:24:02.135 12:08:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.135 --rc genhtml_branch_coverage=1 00:24:02.135 --rc genhtml_function_coverage=1 00:24:02.135 --rc genhtml_legend=1 00:24:02.135 --rc geninfo_all_blocks=1 00:24:02.135 --rc geninfo_unexecuted_blocks=1 00:24:02.135 00:24:02.135 ' 00:24:02.135 12:08:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.136 --rc genhtml_branch_coverage=1 00:24:02.136 --rc genhtml_function_coverage=1 00:24:02.136 --rc genhtml_legend=1 00:24:02.136 --rc geninfo_all_blocks=1 00:24:02.136 --rc geninfo_unexecuted_blocks=1 00:24:02.136 00:24:02.136 ' 00:24:02.136 
12:08:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.136 --rc genhtml_branch_coverage=1 00:24:02.136 --rc genhtml_function_coverage=1 00:24:02.136 --rc genhtml_legend=1 00:24:02.136 --rc geninfo_all_blocks=1 00:24:02.136 --rc geninfo_unexecuted_blocks=1 00:24:02.136 00:24:02.136 ' 00:24:02.136 12:08:07 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:02.136 12:08:07 -- nvmf/common.sh@7 -- # uname -s 00:24:02.136 12:08:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.136 12:08:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.136 12:08:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.136 12:08:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.136 12:08:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.136 12:08:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.136 12:08:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.136 12:08:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.136 12:08:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.136 12:08:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79493c5c-f53c-4dad-804b-85e045bfadae 00:24:02.136 12:08:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=79493c5c-f53c-4dad-804b-85e045bfadae 00:24:02.136 12:08:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.136 12:08:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.136 12:08:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.136 12:08:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.136 12:08:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.136 12:08:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.136 12:08:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.136 12:08:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.136 12:08:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.136 12:08:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.136 12:08:07 -- paths/export.sh@5 -- # export PATH 00:24:02.136 12:08:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.136 12:08:07 -- nvmf/common.sh@46 -- # : 0 00:24:02.136 12:08:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.136 12:08:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.136 12:08:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.136 12:08:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.136 12:08:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.136 12:08:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.136 12:08:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.136 12:08:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.136 12:08:07 -- target/dif.sh@15 -- # NULL_META=16 00:24:02.136 12:08:07 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:02.136 12:08:07 -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:02.136 12:08:07 -- target/dif.sh@15 -- # NULL_DIF=1 00:24:02.136 12:08:07 -- target/dif.sh@135 -- # nvmftestinit 00:24:02.136 12:08:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.136 12:08:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.136 12:08:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.136 12:08:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.136 12:08:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.136 12:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.136 12:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:02.136 12:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.136 12:08:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:02.136 12:08:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:02.136 12:08:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.136 12:08:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.136 12:08:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:02.136 12:08:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:02.136 12:08:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.136 12:08:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.136 12:08:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.136 12:08:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.136 12:08:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.136 12:08:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.136 12:08:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.136 12:08:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.136 12:08:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:02.136 12:08:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:02.136 Cannot find device "nvmf_tgt_br" 
00:24:02.136 12:08:07 -- nvmf/common.sh@154 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.136 Cannot find device "nvmf_tgt_br2" 00:24:02.136 12:08:07 -- nvmf/common.sh@155 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:02.136 12:08:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:02.136 Cannot find device "nvmf_tgt_br" 00:24:02.136 12:08:07 -- nvmf/common.sh@157 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:02.136 Cannot find device "nvmf_tgt_br2" 00:24:02.136 12:08:07 -- nvmf/common.sh@158 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:02.136 12:08:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:02.136 12:08:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.136 12:08:07 -- nvmf/common.sh@161 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.136 12:08:07 -- nvmf/common.sh@162 -- # true 00:24:02.136 12:08:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.136 12:08:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.136 12:08:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.136 12:08:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.136 12:08:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.136 12:08:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.136 12:08:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.136 12:08:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:02.136 12:08:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:02.136 12:08:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:02.136 12:08:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:02.136 12:08:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:02.136 12:08:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:02.396 12:08:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.396 12:08:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.396 12:08:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.396 12:08:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:02.396 12:08:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:02.396 12:08:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.396 12:08:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.396 12:08:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.396 12:08:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.396 12:08:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.396 12:08:07 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:02.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:02.396 00:24:02.396 --- 10.0.0.2 ping statistics --- 00:24:02.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.396 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:02.396 12:08:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:02.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:02.396 00:24:02.396 --- 10.0.0.3 ping statistics --- 00:24:02.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.396 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:02.396 12:08:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:02.396 00:24:02.396 --- 10.0.0.1 ping statistics --- 00:24:02.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.396 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:02.396 12:08:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.396 12:08:07 -- nvmf/common.sh@421 -- # return 0 00:24:02.396 12:08:07 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:24:02.396 12:08:07 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:02.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:02.656 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:02.656 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:02.656 12:08:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.656 12:08:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:02.656 12:08:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:02.656 12:08:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.656 12:08:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:02.656 12:08:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:02.656 12:08:08 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:02.656 12:08:08 -- target/dif.sh@137 -- # nvmfappstart 00:24:02.656 12:08:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:02.656 12:08:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.656 12:08:08 -- common/autotest_common.sh@10 -- # set +x 00:24:02.915 12:08:08 -- nvmf/common.sh@469 -- # nvmfpid=87048 00:24:02.915 12:08:08 -- nvmf/common.sh@470 -- # waitforlisten 87048 00:24:02.915 12:08:08 -- common/autotest_common.sh@829 -- # '[' -z 87048 ']' 00:24:02.915 12:08:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.915 12:08:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.915 12:08:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
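For reference, the veth/bridge topology assembled by the nvmf_veth_init steps above (and verified by these pings) condenses to the following sequence; the interface names, addresses and commands are the same ones shown in the log, only collapsed into one standalone sketch, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is left out for brevity.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2   # initiator side -> target namespace, as checked above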
00:24:02.915 12:08:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:02.915 12:08:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.915 12:08:08 -- common/autotest_common.sh@10 -- # set +x 00:24:02.915 [2024-11-29 12:08:08.222206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:02.915 [2024-11-29 12:08:08.222330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.915 [2024-11-29 12:08:08.361318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.175 [2024-11-29 12:08:08.466548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:03.175 [2024-11-29 12:08:08.466765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.175 [2024-11-29 12:08:08.466785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.175 [2024-11-29 12:08:08.466798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.175 [2024-11-29 12:08:08.466856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.112 12:08:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.112 12:08:09 -- common/autotest_common.sh@862 -- # return 0 00:24:04.112 12:08:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:04.112 12:08:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 12:08:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.112 12:08:09 -- target/dif.sh@139 -- # create_transport 00:24:04.112 12:08:09 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:04.112 12:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 [2024-11-29 12:08:09.308821] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.112 12:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.112 12:08:09 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:04.112 12:08:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:04.112 12:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 ************************************ 00:24:04.112 START TEST fio_dif_1_default 00:24:04.112 ************************************ 00:24:04.112 12:08:09 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:24:04.112 12:08:09 -- target/dif.sh@86 -- # create_subsystems 0 00:24:04.112 12:08:09 -- target/dif.sh@28 -- # local sub 00:24:04.112 12:08:09 -- target/dif.sh@30 -- # for sub in "$@" 00:24:04.112 12:08:09 -- target/dif.sh@31 -- # create_subsystem 0 00:24:04.112 12:08:09 -- target/dif.sh@18 -- # local sub_id=0 00:24:04.112 12:08:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:04.112 12:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 bdev_null0 00:24:04.112 12:08:09 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.112 12:08:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:04.112 12:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 12:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.112 12:08:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:04.112 12:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 12:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.112 12:08:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.112 12:08:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.112 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:24:04.112 [2024-11-29 12:08:09.352941] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.112 12:08:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.112 12:08:09 -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:04.112 12:08:09 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:04.112 12:08:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:04.112 12:08:09 -- nvmf/common.sh@520 -- # config=() 00:24:04.112 12:08:09 -- nvmf/common.sh@520 -- # local subsystem config 00:24:04.112 12:08:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.112 12:08:09 -- target/dif.sh@82 -- # gen_fio_conf 00:24:04.112 12:08:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:04.112 12:08:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.112 12:08:09 -- target/dif.sh@54 -- # local file 00:24:04.112 12:08:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:04.112 { 00:24:04.112 "params": { 00:24:04.112 "name": "Nvme$subsystem", 00:24:04.112 "trtype": "$TEST_TRANSPORT", 00:24:04.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.112 "adrfam": "ipv4", 00:24:04.113 "trsvcid": "$NVMF_PORT", 00:24:04.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.113 "hdgst": ${hdgst:-false}, 00:24:04.113 "ddgst": ${ddgst:-false} 00:24:04.113 }, 00:24:04.113 "method": "bdev_nvme_attach_controller" 00:24:04.113 } 00:24:04.113 EOF 00:24:04.113 )") 00:24:04.113 12:08:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:24:04.113 12:08:09 -- target/dif.sh@56 -- # cat 00:24:04.113 12:08:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.113 12:08:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:24:04.113 12:08:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.113 12:08:09 -- common/autotest_common.sh@1330 -- # shift 00:24:04.113 12:08:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:24:04.113 12:08:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.113 12:08:09 -- nvmf/common.sh@542 -- # cat 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.113 12:08:09 
-- common/autotest_common.sh@1334 -- # grep libasan 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:04.113 12:08:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:04.113 12:08:09 -- target/dif.sh@72 -- # (( file <= files )) 00:24:04.113 12:08:09 -- nvmf/common.sh@544 -- # jq . 00:24:04.113 12:08:09 -- nvmf/common.sh@545 -- # IFS=, 00:24:04.113 12:08:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:04.113 "params": { 00:24:04.113 "name": "Nvme0", 00:24:04.113 "trtype": "tcp", 00:24:04.113 "traddr": "10.0.0.2", 00:24:04.113 "adrfam": "ipv4", 00:24:04.113 "trsvcid": "4420", 00:24:04.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:04.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:04.113 "hdgst": false, 00:24:04.113 "ddgst": false 00:24:04.113 }, 00:24:04.113 "method": "bdev_nvme_attach_controller" 00:24:04.113 }' 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:04.113 12:08:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:04.113 12:08:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:04.113 12:08:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:04.113 12:08:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:04.113 12:08:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:04.113 12:08:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.113 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:04.113 fio-3.35 00:24:04.113 Starting 1 thread 00:24:04.705 [2024-11-29 12:08:09.954971] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:24:04.705 [2024-11-29 12:08:09.955042] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:14.679 00:24:14.679 filename0: (groupid=0, jobs=1): err= 0: pid=87119: Fri Nov 29 12:08:20 2024 00:24:14.679 read: IOPS=9129, BW=35.7MiB/s (37.4MB/s)(357MiB/10001msec) 00:24:14.679 slat (nsec): min=5835, max=84842, avg=8322.73, stdev=3623.19 00:24:14.679 clat (usec): min=228, max=3599, avg=412.56, stdev=56.42 00:24:14.679 lat (usec): min=234, max=3611, avg=420.89, stdev=57.07 00:24:14.679 clat percentiles (usec): 00:24:14.679 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 367], 00:24:14.679 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 424], 00:24:14.679 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 474], 95.00th=[ 490], 00:24:14.679 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 709], 00:24:14.679 | 99.99th=[ 2409] 00:24:14.679 bw ( KiB/s): min=33024, max=41088, per=99.61%, avg=36375.58, stdev=2553.41, samples=19 00:24:14.679 iops : min= 8256, max=10272, avg=9093.89, stdev=638.35, samples=19 00:24:14.679 lat (usec) : 250=0.01%, 500=96.65%, 750=3.31%, 1000=0.02% 00:24:14.679 lat (msec) : 2=0.01%, 4=0.01% 00:24:14.679 cpu : usr=82.46%, sys=15.15%, ctx=18, majf=0, minf=8 00:24:14.679 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.679 issued rwts: total=91302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.679 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:14.679 00:24:14.679 Run status group 0 (all jobs): 00:24:14.679 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=357MiB (374MB), run=10001-10001msec 00:24:14.938 12:08:20 -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:14.938 12:08:20 -- target/dif.sh@43 -- # local sub 00:24:14.938 12:08:20 -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.938 12:08:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.938 12:08:20 -- target/dif.sh@36 -- # local sub_id=0 00:24:14.938 12:08:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.938 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.938 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.938 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.938 12:08:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.938 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.938 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.938 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.938 00:24:14.938 real 0m10.973s 00:24:14.938 user 0m8.864s 00:24:14.938 sys 0m1.793s 00:24:14.938 ************************************ 00:24:14.938 END TEST fio_dif_1_default 00:24:14.938 ************************************ 00:24:14.938 12:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:14.938 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.938 12:08:20 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:14.938 12:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:14.938 12:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:14.938 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.938 ************************************ 00:24:14.938 START 
TEST fio_dif_1_multi_subsystems 00:24:14.938 ************************************ 00:24:14.938 12:08:20 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:24:14.938 12:08:20 -- target/dif.sh@92 -- # local files=1 00:24:14.938 12:08:20 -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:14.938 12:08:20 -- target/dif.sh@28 -- # local sub 00:24:14.938 12:08:20 -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.938 12:08:20 -- target/dif.sh@31 -- # create_subsystem 0 00:24:14.938 12:08:20 -- target/dif.sh@18 -- # local sub_id=0 00:24:14.939 12:08:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 bdev_null0 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 [2024-11-29 12:08:20.379603] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.939 12:08:20 -- target/dif.sh@31 -- # create_subsystem 1 00:24:14.939 12:08:20 -- target/dif.sh@18 -- # local sub_id=1 00:24:14.939 12:08:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 bdev_null1 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.939 12:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.939 12:08:20 -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.939 12:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.939 12:08:20 -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:14.939 12:08:20 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:14.939 12:08:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:14.939 12:08:20 -- nvmf/common.sh@520 -- # config=() 00:24:14.939 12:08:20 -- nvmf/common.sh@520 -- # local subsystem config 00:24:14.939 12:08:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:14.939 12:08:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:14.939 { 00:24:14.939 "params": { 00:24:14.939 "name": "Nvme$subsystem", 00:24:14.939 "trtype": "$TEST_TRANSPORT", 00:24:14.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.939 "adrfam": "ipv4", 00:24:14.939 "trsvcid": "$NVMF_PORT", 00:24:14.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.939 "hdgst": ${hdgst:-false}, 00:24:14.939 "ddgst": ${ddgst:-false} 00:24:14.939 }, 00:24:14.939 "method": "bdev_nvme_attach_controller" 00:24:14.939 } 00:24:14.939 EOF 00:24:14.939 )") 00:24:14.939 12:08:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.939 12:08:20 -- target/dif.sh@82 -- # gen_fio_conf 00:24:14.939 12:08:20 -- target/dif.sh@54 -- # local file 00:24:14.939 12:08:20 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.939 12:08:20 -- target/dif.sh@56 -- # cat 00:24:14.939 12:08:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:24:14.939 12:08:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.939 12:08:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:24:14.939 12:08:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.939 12:08:20 -- nvmf/common.sh@542 -- # cat 00:24:14.939 12:08:20 -- common/autotest_common.sh@1330 -- # shift 00:24:14.939 12:08:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:24:14.939 12:08:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.939 12:08:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:14.939 12:08:20 -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.939 12:08:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:24:14.939 12:08:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.939 12:08:20 -- target/dif.sh@73 -- # cat 00:24:14.939 12:08:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:14.939 12:08:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:14.939 12:08:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:14.939 { 00:24:14.939 "params": { 00:24:14.939 "name": "Nvme$subsystem", 00:24:14.939 "trtype": "$TEST_TRANSPORT", 00:24:14.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.939 "adrfam": "ipv4", 00:24:14.939 "trsvcid": "$NVMF_PORT", 00:24:14.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.939 "hdgst": ${hdgst:-false}, 00:24:14.939 "ddgst": ${ddgst:-false} 00:24:14.939 }, 00:24:14.939 "method": "bdev_nvme_attach_controller" 00:24:14.939 } 00:24:14.939 EOF 00:24:14.939 )") 00:24:14.939 12:08:20 -- target/dif.sh@72 -- # (( file++ )) 00:24:14.939 12:08:20 -- 
target/dif.sh@72 -- # (( file <= files )) 00:24:14.939 12:08:20 -- nvmf/common.sh@542 -- # cat 00:24:14.939 12:08:20 -- nvmf/common.sh@544 -- # jq . 00:24:14.939 12:08:20 -- nvmf/common.sh@545 -- # IFS=, 00:24:14.939 12:08:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:14.939 "params": { 00:24:14.939 "name": "Nvme0", 00:24:14.939 "trtype": "tcp", 00:24:14.939 "traddr": "10.0.0.2", 00:24:14.939 "adrfam": "ipv4", 00:24:14.939 "trsvcid": "4420", 00:24:14.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:14.939 "hdgst": false, 00:24:14.939 "ddgst": false 00:24:14.939 }, 00:24:14.939 "method": "bdev_nvme_attach_controller" 00:24:14.939 },{ 00:24:14.939 "params": { 00:24:14.939 "name": "Nvme1", 00:24:14.939 "trtype": "tcp", 00:24:14.939 "traddr": "10.0.0.2", 00:24:14.939 "adrfam": "ipv4", 00:24:14.939 "trsvcid": "4420", 00:24:14.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.939 "hdgst": false, 00:24:14.939 "ddgst": false 00:24:14.939 }, 00:24:14.939 "method": "bdev_nvme_attach_controller" 00:24:14.939 }' 00:24:15.199 12:08:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:15.199 12:08:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:15.199 12:08:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.199 12:08:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.199 12:08:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:24:15.199 12:08:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:15.199 12:08:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:15.199 12:08:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:15.199 12:08:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.199 12:08:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.199 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.199 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.199 fio-3.35 00:24:15.199 Starting 2 threads 00:24:15.766 [2024-11-29 12:08:21.114558] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:24:15.766 [2024-11-29 12:08:21.114641] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:25.771 00:24:25.771 filename0: (groupid=0, jobs=1): err= 0: pid=87280: Fri Nov 29 12:08:31 2024 00:24:25.771 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(171MiB/10001msec) 00:24:25.771 slat (usec): min=5, max=153, avg=18.47, stdev= 9.30 00:24:25.771 clat (usec): min=425, max=2692, avg=860.12, stdev=67.42 00:24:25.771 lat (usec): min=432, max=2736, avg=878.59, stdev=69.04 00:24:25.771 clat percentiles (usec): 00:24:25.771 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 783], 20.00th=[ 807], 00:24:25.771 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 873], 00:24:25.771 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 938], 95.00th=[ 963], 00:24:25.771 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1090], 99.95th=[ 1401], 00:24:25.771 | 99.99th=[ 2474] 00:24:25.771 bw ( KiB/s): min=17120, max=18016, per=50.11%, avg=17590.05, stdev=280.36, samples=19 00:24:25.771 iops : min= 4280, max= 4504, avg=4397.47, stdev=70.05, samples=19 00:24:25.771 lat (usec) : 500=0.01%, 750=3.22%, 1000=95.14% 00:24:25.771 lat (msec) : 2=1.61%, 4=0.02% 00:24:25.771 cpu : usr=90.37%, sys=7.90%, ctx=14, majf=0, minf=0 00:24:25.771 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.771 issued rwts: total=43888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.771 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.771 filename1: (groupid=0, jobs=1): err= 0: pid=87281: Fri Nov 29 12:08:31 2024 00:24:25.771 read: IOPS=4387, BW=17.1MiB/s (18.0MB/s)(171MiB/10001msec) 00:24:25.771 slat (usec): min=6, max=153, avg=20.84, stdev=10.75 00:24:25.771 clat (usec): min=610, max=3700, avg=854.38, stdev=80.68 00:24:25.771 lat (usec): min=628, max=3722, avg=875.22, stdev=83.79 00:24:25.771 clat percentiles (usec): 00:24:25.771 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 791], 00:24:25.771 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 848], 60.00th=[ 873], 00:24:25.771 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 979], 00:24:25.771 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1319], 00:24:25.771 | 99.99th=[ 2474] 00:24:25.771 bw ( KiB/s): min=17120, max=18016, per=50.10%, avg=17588.21, stdev=278.60, samples=19 00:24:25.771 iops : min= 4280, max= 4504, avg=4397.05, stdev=69.65, samples=19 00:24:25.771 lat (usec) : 750=7.44%, 1000=89.71% 00:24:25.771 lat (msec) : 2=2.83%, 4=0.02% 00:24:25.771 cpu : usr=91.90%, sys=6.62%, ctx=35, majf=0, minf=0 00:24:25.771 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.771 issued rwts: total=43880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.771 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.771 00:24:25.771 Run status group 0 (all jobs): 00:24:25.771 READ: bw=34.3MiB/s (35.9MB/s), 17.1MiB/s-17.1MiB/s (18.0MB/s-18.0MB/s), io=343MiB (359MB), run=10001-10001msec 00:24:26.030 12:08:31 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:26.030 12:08:31 -- target/dif.sh@43 -- # local sub 00:24:26.030 12:08:31 -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.030 12:08:31 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:24:26.030 12:08:31 -- target/dif.sh@36 -- # local sub_id=0 00:24:26.030 12:08:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.030 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.030 12:08:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:26.030 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.030 12:08:31 -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.030 12:08:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:26.030 12:08:31 -- target/dif.sh@36 -- # local sub_id=1 00:24:26.030 12:08:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.030 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.030 12:08:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:26.030 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.030 00:24:26.030 real 0m11.129s 00:24:26.030 user 0m18.968s 00:24:26.030 sys 0m1.756s 00:24:26.030 12:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:26.030 ************************************ 00:24:26.030 END TEST fio_dif_1_multi_subsystems 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 ************************************ 00:24:26.030 12:08:31 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:26.030 12:08:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:26.030 12:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.030 ************************************ 00:24:26.030 START TEST fio_dif_rand_params 00:24:26.030 ************************************ 00:24:26.030 12:08:31 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:24:26.030 12:08:31 -- target/dif.sh@100 -- # local NULL_DIF 00:24:26.030 12:08:31 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:26.030 12:08:31 -- target/dif.sh@103 -- # NULL_DIF=3 00:24:26.030 12:08:31 -- target/dif.sh@103 -- # bs=128k 00:24:26.030 12:08:31 -- target/dif.sh@103 -- # numjobs=3 00:24:26.030 12:08:31 -- target/dif.sh@103 -- # iodepth=3 00:24:26.030 12:08:31 -- target/dif.sh@103 -- # runtime=5 00:24:26.030 12:08:31 -- target/dif.sh@105 -- # create_subsystems 0 00:24:26.030 12:08:31 -- target/dif.sh@28 -- # local sub 00:24:26.030 12:08:31 -- target/dif.sh@30 -- # for sub in "$@" 00:24:26.030 12:08:31 -- target/dif.sh@31 -- # create_subsystem 0 00:24:26.030 12:08:31 -- target/dif.sh@18 -- # local sub_id=0 00:24:26.030 12:08:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:26.030 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.030 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.289 bdev_null0 00:24:26.289 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.289 
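Each dif case in this run stands its target up the same way; the nvmf_create_subsystem, add_ns and add_listener calls that follow complete the sequence sketched here. The arguments below appear verbatim in the surrounding rpc_cmd lines (with --dif-type 3 being the value used for this fio_dif_rand_params case); issuing them as direct rpc.py invocations instead of the test's rpc_cmd wrapper is the only assumption.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Teardown mirrors this, as in the destroy_subsystem calls shown above:
# nvmf_delete_subsystem for the nqn, then bdev_null_delete for the bdev.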
12:08:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:26.289 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.289 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.289 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.289 12:08:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:26.289 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.289 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.289 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.289 12:08:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.289 12:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.289 12:08:31 -- common/autotest_common.sh@10 -- # set +x 00:24:26.289 [2024-11-29 12:08:31.566091] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.289 12:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.289 12:08:31 -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:26.289 12:08:31 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:26.289 12:08:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:26.289 12:08:31 -- nvmf/common.sh@520 -- # config=() 00:24:26.289 12:08:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:26.289 12:08:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.289 12:08:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.289 12:08:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.289 { 00:24:26.289 "params": { 00:24:26.289 "name": "Nvme$subsystem", 00:24:26.289 "trtype": "$TEST_TRANSPORT", 00:24:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.289 "adrfam": "ipv4", 00:24:26.289 "trsvcid": "$NVMF_PORT", 00:24:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.289 "hdgst": ${hdgst:-false}, 00:24:26.289 "ddgst": ${ddgst:-false} 00:24:26.289 }, 00:24:26.289 "method": "bdev_nvme_attach_controller" 00:24:26.289 } 00:24:26.289 EOF 00:24:26.289 )") 00:24:26.289 12:08:31 -- target/dif.sh@82 -- # gen_fio_conf 00:24:26.290 12:08:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.290 12:08:31 -- target/dif.sh@54 -- # local file 00:24:26.290 12:08:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:24:26.290 12:08:31 -- target/dif.sh@56 -- # cat 00:24:26.290 12:08:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.290 12:08:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:24:26.290 12:08:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.290 12:08:31 -- common/autotest_common.sh@1330 -- # shift 00:24:26.290 12:08:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:24:26.290 12:08:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.290 12:08:31 -- nvmf/common.sh@542 -- # cat 00:24:26.290 12:08:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:26.290 12:08:31 -- target/dif.sh@72 -- # (( file <= files )) 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.290 12:08:31 -- nvmf/common.sh@544 -- # jq . 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:24:26.290 12:08:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:26.290 12:08:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:26.290 "params": { 00:24:26.290 "name": "Nvme0", 00:24:26.290 "trtype": "tcp", 00:24:26.290 "traddr": "10.0.0.2", 00:24:26.290 "adrfam": "ipv4", 00:24:26.290 "trsvcid": "4420", 00:24:26.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:26.290 "hdgst": false, 00:24:26.290 "ddgst": false 00:24:26.290 }, 00:24:26.290 "method": "bdev_nvme_attach_controller" 00:24:26.290 }' 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:26.290 12:08:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:26.290 12:08:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:26.290 12:08:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:26.290 12:08:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:26.290 12:08:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.290 12:08:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.290 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:26.290 ... 00:24:26.290 fio-3.35 00:24:26.290 Starting 3 threads 00:24:26.858 [2024-11-29 12:08:32.173973] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
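The run above wires the whole path together in-process: rpc_cmd creates a 64 MiB, 512-byte-block null bdev with 16 bytes of metadata and DIF type 3, exposes it as nqn.2016-06.io.spdk:cnode0 over NVMe/TCP on 10.0.0.2:4420, and fio is then launched through the SPDK bdev plugin with the generated attach-controller JSON fed in on /dev/fd/62. A minimal standalone sketch of the same flow follows; it assumes a built SPDK tree at /home/vagrant/spdk_repo/spdk with a running nvmf_tgt, uses scripts/rpc.py in place of the test's rpc_cmd wrapper, and treats the JSON wrapper layout and the nvmf_create_transport call as assumptions (only the params block is printed verbatim in the trace above).

# Sketch only -- not the test script. Assumes nvmf_tgt is already running.
SPDK=/home/vagrant/spdk_repo/spdk

# Target side: DIF type 3 null bdev exported over NVMe/TCP (mirrors the rpc_cmd calls above).
$SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$SPDK/scripts/rpc.py nvmf_create_transport -t TCP   # not shown in this excerpt; assumed earlier in setup
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: give fio a JSON config that attaches the remote controller as bdev Nvme0n1,
# then drive it through the SPDK bdev fio plugin. The "subsystems"/"config" wrapper is the
# standard SPDK JSON-config shape and is an assumption here; the params match the printf above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF

# Same parameters as the NULL_DIF=3 run above (randread, bs=128k, numjobs=3, iodepth=3, runtime=5).
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
  --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 --thread \
  --rw=randread --bs=128k --numjobs=3 --iodepth=3 --time_based --runtime=5

The "filename" fio operates on is the bdev name (Nvme0n1) that bdev_nvme_attach_controller registers, which is why the error lines later in this log refer to files Nvme0n1, Nvme1n1 and Nvme2n1 rather than to kernel block devices.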
00:24:26.858 [2024-11-29 12:08:32.174043] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:32.128 00:24:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=87443: Fri Nov 29 12:08:37 2024 00:24:32.128 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(158MiB/5008msec) 00:24:32.128 slat (usec): min=5, max=114, avg=21.07, stdev=11.81 00:24:32.128 clat (usec): min=9839, max=13565, avg=11847.52, stdev=709.46 00:24:32.128 lat (usec): min=9851, max=13588, avg=11868.59, stdev=713.82 00:24:32.128 clat percentiles (usec): 00:24:32.128 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11076], 20.00th=[11338], 00:24:32.128 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:24:32.128 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:24:32.128 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:24:32.128 | 99.99th=[13566] 00:24:32.128 bw ( KiB/s): min=29952, max=34560, per=33.34%, avg=32262.40, stdev=1659.19, samples=10 00:24:32.128 iops : min= 234, max= 270, avg=252.00, stdev=12.96, samples=10 00:24:32.128 lat (msec) : 10=1.66%, 20=98.34% 00:24:32.128 cpu : usr=93.43%, sys=5.71%, ctx=15, majf=0, minf=9 00:24:32.128 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.128 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=87444: Fri Nov 29 12:08:37 2024 00:24:32.128 read: IOPS=252, BW=31.5MiB/s (33.0MB/s)(158MiB/5011msec) 00:24:32.128 slat (nsec): min=6425, max=85841, avg=21318.10, stdev=12374.63 00:24:32.128 clat (usec): min=9833, max=14241, avg=11851.70, stdev=715.51 00:24:32.128 lat (usec): min=9846, max=14267, avg=11873.02, stdev=720.77 00:24:32.128 clat percentiles (usec): 00:24:32.128 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11076], 20.00th=[11338], 00:24:32.128 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:24:32.128 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:24:32.128 | 99.00th=[13435], 99.50th=[13435], 99.90th=[14222], 99.95th=[14222], 00:24:32.128 | 99.99th=[14222] 00:24:32.128 bw ( KiB/s): min=29952, max=34560, per=33.33%, avg=32256.00, stdev=1659.07, samples=10 00:24:32.128 iops : min= 234, max= 270, avg=252.00, stdev=12.96, samples=10 00:24:32.128 lat (msec) : 10=1.66%, 20=98.34% 00:24:32.128 cpu : usr=93.15%, sys=6.21%, ctx=109, majf=0, minf=9 00:24:32.129 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.129 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.129 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.129 filename0: (groupid=0, jobs=1): err= 0: pid=87445: Fri Nov 29 12:08:37 2024 00:24:32.129 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5012msec) 00:24:32.129 slat (nsec): min=5903, max=97680, avg=18984.16, stdev=11330.82 00:24:32.129 clat (usec): min=9828, max=16352, avg=11862.46, stdev=740.06 00:24:32.129 lat (usec): min=9840, max=16384, avg=11881.44, stdev=744.78 00:24:32.129 clat percentiles (usec): 00:24:32.129 | 1.00th=[ 9896], 5.00th=[10683], 
10.00th=[11076], 20.00th=[11338], 00:24:32.129 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:24:32.129 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:24:32.129 | 99.00th=[13435], 99.50th=[13435], 99.90th=[16319], 99.95th=[16319], 00:24:32.129 | 99.99th=[16319] 00:24:32.129 bw ( KiB/s): min=29952, max=34560, per=33.33%, avg=32256.00, stdev=1659.07, samples=10 00:24:32.129 iops : min= 234, max= 270, avg=252.00, stdev=12.96, samples=10 00:24:32.129 lat (msec) : 10=1.66%, 20=98.34% 00:24:32.129 cpu : usr=93.12%, sys=6.23%, ctx=17, majf=0, minf=8 00:24:32.129 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.129 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.129 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.129 00:24:32.129 Run status group 0 (all jobs): 00:24:32.129 READ: bw=94.5MiB/s (99.1MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.1MB/s), io=474MiB (497MB), run=5008-5012msec 00:24:32.129 12:08:37 -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:32.129 12:08:37 -- target/dif.sh@43 -- # local sub 00:24:32.129 12:08:37 -- target/dif.sh@45 -- # for sub in "$@" 00:24:32.129 12:08:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:32.129 12:08:37 -- target/dif.sh@36 -- # local sub_id=0 00:24:32.129 12:08:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # NULL_DIF=2 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # bs=4k 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # numjobs=8 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # iodepth=16 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # runtime= 00:24:32.129 12:08:37 -- target/dif.sh@109 -- # files=2 00:24:32.129 12:08:37 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:32.129 12:08:37 -- target/dif.sh@28 -- # local sub 00:24:32.129 12:08:37 -- target/dif.sh@30 -- # for sub in "$@" 00:24:32.129 12:08:37 -- target/dif.sh@31 -- # create_subsystem 0 00:24:32.129 12:08:37 -- target/dif.sh@18 -- # local sub_id=0 00:24:32.129 12:08:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 bdev_null0 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 [2024-11-29 12:08:37.564287] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@30 -- # for sub in "$@" 00:24:32.129 12:08:37 -- target/dif.sh@31 -- # create_subsystem 1 00:24:32.129 12:08:37 -- target/dif.sh@18 -- # local sub_id=1 00:24:32.129 12:08:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 bdev_null1 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@30 -- # for sub in "$@" 00:24:32.129 12:08:37 -- target/dif.sh@31 -- # create_subsystem 2 00:24:32.129 12:08:37 -- target/dif.sh@18 -- # local sub_id=2 00:24:32.129 12:08:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 bdev_null2 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.129 12:08:37 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.129 12:08:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:32.129 12:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.129 12:08:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.388 12:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.389 12:08:37 -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:32.389 12:08:37 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:32.389 12:08:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:32.389 12:08:37 -- nvmf/common.sh@520 -- # config=() 00:24:32.389 12:08:37 -- nvmf/common.sh@520 -- # local subsystem config 00:24:32.389 12:08:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.389 12:08:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.389 { 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme$subsystem", 00:24:32.389 "trtype": "$TEST_TRANSPORT", 00:24:32.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "$NVMF_PORT", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.389 "hdgst": ${hdgst:-false}, 00:24:32.389 "ddgst": ${ddgst:-false} 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 } 00:24:32.389 EOF 00:24:32.389 )") 00:24:32.389 12:08:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:32.389 12:08:37 -- target/dif.sh@82 -- # gen_fio_conf 00:24:32.389 12:08:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:24:32.389 12:08:37 -- target/dif.sh@54 -- # local file 00:24:32.389 12:08:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.389 12:08:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:24:32.389 12:08:37 -- target/dif.sh@56 -- # cat 00:24:32.389 12:08:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.389 12:08:37 -- common/autotest_common.sh@1330 -- # shift 00:24:32.389 12:08:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:24:32.389 12:08:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # cat 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file <= files )) 00:24:32.389 12:08:37 -- target/dif.sh@73 -- # cat 00:24:32.389 12:08:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.389 { 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme$subsystem", 00:24:32.389 "trtype": "$TEST_TRANSPORT", 00:24:32.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "$NVMF_PORT", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:24:32.389 "hdgst": ${hdgst:-false}, 00:24:32.389 "ddgst": ${ddgst:-false} 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 } 00:24:32.389 EOF 00:24:32.389 )") 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file++ )) 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file <= files )) 00:24:32.389 12:08:37 -- target/dif.sh@73 -- # cat 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # cat 00:24:32.389 12:08:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:32.389 { 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme$subsystem", 00:24:32.389 "trtype": "$TEST_TRANSPORT", 00:24:32.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "$NVMF_PORT", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.389 "hdgst": ${hdgst:-false}, 00:24:32.389 "ddgst": ${ddgst:-false} 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 } 00:24:32.389 EOF 00:24:32.389 )") 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file++ )) 00:24:32.389 12:08:37 -- target/dif.sh@72 -- # (( file <= files )) 00:24:32.389 12:08:37 -- nvmf/common.sh@542 -- # cat 00:24:32.389 12:08:37 -- nvmf/common.sh@544 -- # jq . 00:24:32.389 12:08:37 -- nvmf/common.sh@545 -- # IFS=, 00:24:32.389 12:08:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme0", 00:24:32.389 "trtype": "tcp", 00:24:32.389 "traddr": "10.0.0.2", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "4420", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:32.389 "hdgst": false, 00:24:32.389 "ddgst": false 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 },{ 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme1", 00:24:32.389 "trtype": "tcp", 00:24:32.389 "traddr": "10.0.0.2", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "4420", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.389 "hdgst": false, 00:24:32.389 "ddgst": false 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 },{ 00:24:32.389 "params": { 00:24:32.389 "name": "Nvme2", 00:24:32.389 "trtype": "tcp", 00:24:32.389 "traddr": "10.0.0.2", 00:24:32.389 "adrfam": "ipv4", 00:24:32.389 "trsvcid": "4420", 00:24:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:32.389 "hdgst": false, 00:24:32.389 "ddgst": false 00:24:32.389 }, 00:24:32.389 "method": "bdev_nvme_attach_controller" 00:24:32.389 }' 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:32.389 12:08:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:32.389 12:08:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:24:32.389 12:08:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:24:32.389 12:08:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:24:32.389 12:08:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:32.389 12:08:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:32.389 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:32.389 ... 00:24:32.389 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:32.389 ... 00:24:32.389 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:32.389 ... 00:24:32.389 fio-3.35 00:24:32.390 Starting 24 threads 00:24:32.957 [2024-11-29 12:08:38.403087] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:32.957 [2024-11-29 12:08:38.403152] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:45.164 fio: pid=87546, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 [2024-11-29 12:08:48.930017] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1250c00 via correct icresp 00:24:45.164 [2024-11-29 12:08:48.930076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1250c00 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=36966400, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=55111680, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=38477824, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=23859200, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=58073088, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=59396096, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=56569856, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=61960192, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=41091072, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=42139648, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=4927488, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=66584576, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=53096448, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=13066240, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=66953216, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=55177216, buflen=4096 00:24:45.164 fio: pid=87556, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 [2024-11-29 12:08:49.341082] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1250d80 via correct icresp 00:24:45.164 [2024-11-29 12:08:49.341145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1250d80 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 
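The 24-thread banner above (filename0/filename1/filename2, rw=randread, bs=4k, iodepth=16) comes from the job description gen_fio_conf builds for the three NULL_DIF=2 subsystems, with one job section per attached bdev and numjobs=8 from the parameters set earlier; three sections times numjobs=8 accounts for the 24 threads fio reports. A hedged reconstruction of that job file is sketched below; the exact option set of the generated file is an assumption, and /tmp/nvmf.json stands in for the three-controller JSON printed above.

# Sketch of the generated job description for the three-file NULL_DIF=2 run (assumed layout).
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF

# Invocation mirrors the traced command, with regular files in place of /dev/fd/62 and /dev/fd/61.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf.json /tmp/dif_rand.fio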
00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=21307392, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=34897920, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=50384896, buflen=4096 00:24:45.164 [2024-11-29 12:08:49.359998] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1250600 via correct icresp 00:24:45.164 [2024-11-29 12:08:49.360036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1250600 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:24:45.164 fio: pid=87559, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 [2024-11-29 12:08:49.367014] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1250a80 via correct icresp 
00:24:45.164 [2024-11-29 12:08:49.367051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1250a80 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:24:45.164 fio: pid=87564, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:24:45.164 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:24:45.164 [2024-11-29 12:08:49.380043] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251500 via correct icresp 00:24:45.164 [2024-11-29 12:08:49.380099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251500 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:24:45.164 fio: pid=87551, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:24:45.164 fio: io_u error 
on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:24:45.164 [2024-11-29 12:08:49.386059] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251080 via correct icresp 00:24:45.164 [2024-11-29 12:08:49.386267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251080 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=35905536, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:24:45.164 fio: pid=87552, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 00:24:45.164 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:24:45.164 [2024-11-29 12:08:49.389974] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251e00 via correct icresp 00:24:45.164 [2024-11-29 12:08:49.390016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251e00 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=15663104, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=9408512, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=15581184, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=27107328, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=55336960, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=39804928, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=46354432, buflen=4096 00:24:45.164 fio: io_u error on file Nvme0n1: Input/output error: read offset=34320384, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=9441280, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=26243072, 
buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=39927808, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=55951360, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=46231552, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=23080960, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=37081088, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=33689600, buflen=4096 00:24:45.165 [2024-11-29 12:08:49.394332] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251380 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.394374] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251200 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.394421] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251800 via correct icresp 00:24:45.165 fio: pid=87542, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 [2024-11-29 12:08:49.394456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251380 00:24:45.165 [2024-11-29 12:08:49.394516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251800 00:24:45.165 [2024-11-29 12:08:49.394539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251200 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:24:45.165 fio: pid=87558, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=41910272, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=65564672, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=58109952, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=44072960, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=34648064, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=40460288, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=65613824, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=966656, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=15175680, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=58757120, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:24:45.165 fio: pid=87541, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: pid=87560, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=21958656, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=6856704, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=56102912, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=45244416, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=37945344, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=63819776, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=25673728, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=864256, buflen=4096 00:24:45.165 fio: io_u error on file Nvme0n1: Input/output error: read offset=50180096, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 00:24:45.165 [2024-11-29 12:08:49.395631] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251680 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.395774] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1250f00 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.395798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to connect tqpair=0x1251680 00:24:45.165 [2024-11-29 12:08:49.396244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1250f00 00:24:45.165 [2024-11-29 12:08:49.396373] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251c80 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.396384] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3fca180 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.396376] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251b00 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.396431] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3fca000 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.396472] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1251980 via correct icresp 00:24:45.165 fio: pid=87561, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: pid=87562, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=38490112, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=36057088, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=62701568, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=29208576, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=61161472, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=13312000, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=55738368, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read 
offset=11722752, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=51109888, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=53731328, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=39395328, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=3325952, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=3260416, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=49025024, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=11665408, buflen=4096 00:24:45.165 fio: io_u error on file Nvme2n1: Input/output error: read offset=10444800, buflen=4096 00:24:45.165 [2024-11-29 12:08:49.397083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251c80 00:24:45.165 [2024-11-29 12:08:49.396407] nvme_tcp.c:2320:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3fca300 via correct icresp 00:24:45.165 [2024-11-29 12:08:49.397173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3fca180 00:24:45.165 [2024-11-29 12:08:49.397276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251b00 00:24:45.165 [2024-11-29 12:08:49.397361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3fca000 00:24:45.165 fio: pid=87550, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 fio: pid=87544, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.165 [2024-11-29 12:08:49.397578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1251980 00:24:45.165 [2024-11-29 12:08:49.397605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3fca300 00:24:45.165 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:24:45.165 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=66187264, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:24:45.166 [2024-11-29 12:08:49.397849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116de00 (9): Bad file descriptor 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: 
Input/output error: read offset=37040128, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:24:45.166 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:24:45.166 fio: pid=87555, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.166 fio: pid=87548, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:24:45.166 [2024-11-29 12:08:49.398034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116d680 (9): Bad file descriptor 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:24:45.166 fio: pid=87545, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=23162880, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:24:45.166 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:24:45.166 fio: pid=87557, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:24:45.166 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:24:45.166 [2024-11-29 12:08:49.399432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116d980 (9): Bad file descriptor 00:24:45.166 00:24:45.166 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87541: Fri Nov 29 12:08:49 2024 00:24:45.166 cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=0 00:24:45.166 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.166 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.166 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.166 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87542: Fri Nov 29 12:08:49 2024 00:24:45.166 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.166 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.166 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.166 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.166 filename0: (groupid=0, jobs=1): err= 0: pid=87543: Fri Nov 29 12:08:49 2024 00:24:45.166 read: IOPS=814, BW=3256KiB/s (3334kB/s)(31.8MiB/10006msec) 00:24:45.166 slat (usec): min=5, max=9023, avg=20.03, stdev=188.86 00:24:45.166 clat (usec): min=855, max=51313, avg=19485.08, stdev=6348.80 00:24:45.166 lat (usec): min=865, max=51329, avg=19505.10, stdev=6352.33 00:24:45.166 clat percentiles (usec): 00:24:45.166 | 1.00th=[ 4015], 5.00th=[10945], 10.00th=[11994], 20.00th=[13960], 
00:24:45.166 | 30.00th=[15533], 40.00th=[17171], 50.00th=[20055], 60.00th=[21890], 00:24:45.166 | 70.00th=[22938], 80.00th=[23725], 90.00th=[25822], 95.00th=[29754], 00:24:45.166 | 99.00th=[35914], 99.50th=[43779], 99.90th=[46924], 99.95th=[47973], 00:24:45.166 | 99.99th=[51119] 00:24:45.166 bw ( KiB/s): min= 2528, max= 3680, per=16.39%, avg=3228.32, stdev=335.49, samples=19 00:24:45.166 iops : min= 632, max= 920, avg=807.05, stdev=83.87, samples=19 00:24:45.166 lat (usec) : 1000=0.02% 00:24:45.167 lat (msec) : 2=0.17%, 4=0.80%, 10=2.32%, 20=46.10%, 50=50.56% 00:24:45.167 lat (msec) : 100=0.02% 00:24:45.167 cpu : usr=37.58%, sys=1.85%, ctx=1119, majf=0, minf=9 00:24:45.167 IO depths : 1=2.3%, 2=8.1%, 4=23.6%, 8=55.8%, 16=10.3%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=8145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87544: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87545: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87546: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=1 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename0: (groupid=0, jobs=1): err= 0: pid=87547: Fri Nov 29 12:08:49 2024 00:24:45.167 read: IOPS=841, BW=3368KiB/s (3449kB/s)(32.9MiB/10005msec) 00:24:45.167 slat (usec): min=5, max=8027, avg=20.01, stdev=149.87 00:24:45.167 clat (usec): min=593, max=52660, avg=18847.19, stdev=6344.09 00:24:45.167 lat (usec): min=603, max=52688, avg=18867.20, stdev=6345.17 00:24:45.167 clat percentiles (usec): 00:24:45.167 | 1.00th=[ 4555], 5.00th=[ 8848], 10.00th=[11600], 20.00th=[13960], 00:24:45.167 | 30.00th=[15401], 40.00th=[16188], 50.00th=[18482], 60.00th=[20841], 00:24:45.167 | 70.00th=[22414], 80.00th=[23725], 
90.00th=[25560], 95.00th=[29230], 00:24:45.167 | 99.00th=[36963], 99.50th=[42730], 99.90th=[46924], 99.95th=[47449], 00:24:45.167 | 99.99th=[52691] 00:24:45.167 bw ( KiB/s): min= 2448, max= 4216, per=16.97%, avg=3343.79, stdev=473.66, samples=19 00:24:45.167 iops : min= 612, max= 1054, avg=835.89, stdev=118.41, samples=19 00:24:45.167 lat (usec) : 750=0.02% 00:24:45.167 lat (msec) : 2=0.27%, 4=0.47%, 10=6.05%, 20=49.28%, 50=43.87% 00:24:45.167 lat (msec) : 100=0.02% 00:24:45.167 cpu : usr=43.78%, sys=2.02%, ctx=1297, majf=0, minf=9 00:24:45.167 IO depths : 1=1.9%, 2=7.4%, 4=23.1%, 8=56.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=8424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87548: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 0: pid=87549: Fri Nov 29 12:08:49 2024 00:24:45.167 read: IOPS=809, BW=3238KiB/s (3316kB/s)(31.7MiB/10015msec) 00:24:45.167 slat (usec): min=3, max=8067, avg=26.51, stdev=253.94 00:24:45.167 clat (usec): min=1306, max=58349, avg=19553.38, stdev=6414.09 00:24:45.167 lat (usec): min=1316, max=58361, avg=19579.89, stdev=6414.65 00:24:45.167 clat percentiles (usec): 00:24:45.167 | 1.00th=[ 5669], 5.00th=[11076], 10.00th=[12256], 20.00th=[14222], 00:24:45.167 | 30.00th=[15533], 40.00th=[16909], 50.00th=[19792], 60.00th=[21627], 00:24:45.167 | 70.00th=[22676], 80.00th=[23725], 90.00th=[26346], 95.00th=[31589], 00:24:45.167 | 99.00th=[37487], 99.50th=[42730], 99.90th=[46400], 99.95th=[47973], 00:24:45.167 | 99.99th=[58459] 00:24:45.167 bw ( KiB/s): min= 2488, max= 3960, per=16.43%, avg=3236.05, stdev=369.32, samples=20 00:24:45.167 iops : min= 622, max= 990, avg=809.00, stdev=92.32, samples=20 00:24:45.167 lat (msec) : 2=0.17%, 4=0.51%, 10=3.19%, 20=46.92%, 50=49.18% 00:24:45.167 lat (msec) : 100=0.02% 00:24:45.167 cpu : usr=38.53%, sys=2.01%, ctx=1151, majf=0, minf=9 00:24:45.167 IO depths : 1=2.2%, 2=7.7%, 4=22.4%, 8=57.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=8107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87550: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87551: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87552: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 0: pid=87553: Fri Nov 29 12:08:49 2024 00:24:45.167 read: IOPS=826, BW=3307KiB/s (3386kB/s)(32.3MiB/10002msec) 00:24:45.167 slat (usec): min=6, max=10074, avg=26.73, stdev=253.66 00:24:45.167 clat (usec): min=713, max=53164, avg=19126.26, stdev=6273.31 00:24:45.167 lat (usec): min=743, max=53180, avg=19152.99, stdev=6278.09 00:24:45.167 clat percentiles (usec): 00:24:45.167 | 1.00th=[ 6849], 5.00th=[10421], 10.00th=[12256], 20.00th=[14484], 00:24:45.167 | 30.00th=[15533], 40.00th=[16188], 50.00th=[18482], 60.00th=[20841], 00:24:45.167 | 70.00th=[22152], 80.00th=[23462], 90.00th=[26084], 95.00th=[30540], 00:24:45.167 | 99.00th=[38536], 99.50th=[43779], 99.90th=[46400], 99.95th=[48497], 00:24:45.167 | 99.99th=[53216] 00:24:45.167 bw ( KiB/s): min= 2320, max= 3920, per=16.67%, avg=3283.00, stdev=424.02, samples=19 00:24:45.167 iops : min= 580, max= 980, avg=820.74, stdev=106.00, samples=19 00:24:45.167 lat (usec) : 750=0.02% 00:24:45.167 lat (msec) : 2=0.16%, 4=0.02%, 10=4.14%, 20=52.46%, 50=43.15% 00:24:45.167 lat (msec) : 100=0.05% 00:24:45.167 cpu : usr=41.33%, sys=1.75%, ctx=1251, majf=0, minf=9 00:24:45.167 IO depths : 1=2.2%, 2=7.9%, 4=23.5%, 8=56.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=8268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 0: pid=87554: Fri Nov 29 12:08:49 2024 00:24:45.167 read: IOPS=792, BW=3168KiB/s (3244kB/s)(31.0MiB/10005msec) 00:24:45.167 slat (usec): min=6, max=10022, avg=27.79, stdev=301.37 00:24:45.167 clat (usec): min=887, max=51640, avg=19985.69, stdev=6494.84 00:24:45.167 lat (usec): min=897, max=51650, avg=20013.48, stdev=6496.16 00:24:45.167 clat percentiles (usec): 00:24:45.167 | 1.00th=[ 6194], 5.00th=[11076], 10.00th=[11994], 20.00th=[13566], 00:24:45.167 | 30.00th=[15401], 40.00th=[18744], 50.00th=[20841], 60.00th=[22414], 00:24:45.167 | 
70.00th=[23200], 80.00th=[23987], 90.00th=[26870], 95.00th=[31065], 00:24:45.167 | 99.00th=[36963], 99.50th=[44827], 99.90th=[47973], 99.95th=[51119], 00:24:45.167 | 99.99th=[51643] 00:24:45.167 bw ( KiB/s): min= 2384, max= 3664, per=16.02%, avg=3155.32, stdev=362.20, samples=19 00:24:45.167 iops : min= 596, max= 916, avg=788.79, stdev=90.54, samples=19 00:24:45.167 lat (usec) : 1000=0.03% 00:24:45.167 lat (msec) : 2=0.05%, 4=0.35%, 10=2.57%, 20=41.09%, 50=55.86% 00:24:45.167 lat (msec) : 100=0.05% 00:24:45.167 cpu : usr=33.65%, sys=1.60%, ctx=1036, majf=0, minf=9 00:24:45.167 IO depths : 1=2.4%, 2=8.2%, 4=23.5%, 8=55.9%, 16=10.0%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=7924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87555: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=4, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87556: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87557: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87558: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output 
error): pid=87559: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.167 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87560: Fri Nov 29 12:08:49 2024 00:24:45.167 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:45.167 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.167 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.168 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87561: Fri Nov 29 12:08:49 2024 00:24:45.168 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.168 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87562: Fri Nov 29 12:08:49 2024 00:24:45.168 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.168 filename2: (groupid=0, jobs=1): err= 0: pid=87563: Fri Nov 29 12:08:49 2024 00:24:45.168 read: IOPS=845, BW=3381KiB/s (3462kB/s)(33.0MiB/10003msec) 00:24:45.168 slat (usec): min=6, max=8074, avg=25.12, stdev=231.62 00:24:45.168 clat (usec): min=806, max=48919, avg=18752.22, stdev=6588.43 00:24:45.168 lat (usec): min=819, max=49186, avg=18777.34, stdev=6595.88 00:24:45.168 clat percentiles (usec): 00:24:45.168 | 1.00th=[ 3294], 5.00th=[ 8586], 10.00th=[11207], 20.00th=[13960], 00:24:45.168 | 30.00th=[15401], 40.00th=[16188], 50.00th=[17957], 60.00th=[20579], 00:24:45.168 | 70.00th=[22152], 80.00th=[23725], 90.00th=[25822], 95.00th=[30540], 00:24:45.168 | 99.00th=[38536], 99.50th=[42206], 99.90th=[44827], 99.95th=[45876], 00:24:45.168 | 99.99th=[49021] 00:24:45.168 bw ( KiB/s): min= 2512, max= 4104, per=17.07%, avg=3363.58, stdev=443.52, samples=19 00:24:45.168 iops : min= 628, max= 1026, avg=840.84, stdev=110.86, samples=19 00:24:45.168 lat (usec) : 1000=0.05% 00:24:45.168 lat (msec) : 2=0.33%, 4=0.72%, 10=6.01%, 20=50.72%, 50=42.18% 00:24:45.168 cpu : usr=39.31%, sys=1.99%, ctx=1646, majf=0, minf=9 00:24:45.168 IO depths : 1=2.1%, 2=7.3%, 4=21.8%, 8=58.0%, 
16=10.9%, 32=0.0%, >=64=0.0% 00:24:45.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 issued rwts: total=8455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.168 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87564: Fri Nov 29 12:08:49 2024 00:24:45.168 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:45.168 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:45.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.168 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:45.168 00:24:45.168 Run status group 0 (all jobs): 00:24:45.168 READ: bw=19.2MiB/s (20.2MB/s), 3168KiB/s-3381KiB/s (3244kB/s-3462kB/s), io=193MiB (202MB), run=10002-10015msec 00:24:45.168 12:08:49 -- common/autotest_common.sh@1341 -- # trap - ERR 00:24:45.168 12:08:49 -- common/autotest_common.sh@1341 -- # print_backtrace 00:24:45.168 12:08:49 -- common/autotest_common.sh@1142 -- # [[ ehxBET =~ e ]] 00:24:45.168 12:08:49 -- common/autotest_common.sh@1144 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:24:45.168 12:08:49 -- common/autotest_common.sh@1144 -- # local args 00:24:45.168 12:08:49 -- common/autotest_common.sh@1146 -- # xtrace_disable 00:24:45.168 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.168 ========== Backtrace start: ========== 00:24:45.168 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1341 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:45.168 ... 00:24:45.168 1336 break 00:24:45.168 1337 fi 00:24:45.168 1338 done 00:24:45.168 1339 00:24:45.168 1340 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:45.168 1341 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:45.168 1342 } 00:24:45.168 1343 00:24:45.168 1344 function fio_bdev() { 00:24:45.168 1345 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:45.168 1346 } 00:24:45.168 ... 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1345 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:45.168 ... 00:24:45.168 1340 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:45.168 1341 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:45.168 1342 } 00:24:45.168 1343 00:24:45.168 1344 function fio_bdev() { 00:24:45.168 1345 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:45.168 1346 } 00:24:45.168 1347 00:24:45.168 1348 function fio_nvme() { 00:24:45.168 1349 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:24:45.168 1350 } 00:24:45.168 ... 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:24:45.168 ... 
00:24:45.168 77 FIO 00:24:45.168 78 done 00:24:45.168 79 } 00:24:45.168 80 00:24:45.168 81 fio() { 00:24:45.168 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:24:45.168 83 } 00:24:45.168 84 00:24:45.168 85 fio_dif_1() { 00:24:45.168 86 create_subsystems 0 00:24:45.168 87 fio <(create_json_sub_conf 0) 00:24:45.168 ... 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:24:45.168 ... 00:24:45.168 107 destroy_subsystems 0 00:24:45.168 108 00:24:45.168 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:24:45.168 110 00:24:45.168 111 create_subsystems 0 1 2 00:24:45.168 => 112 fio <(create_json_sub_conf 0 1 2) 00:24:45.168 113 destroy_subsystems 0 1 2 00:24:45.168 114 00:24:45.168 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:24:45.168 116 00:24:45.168 117 create_subsystems 0 1 00:24:45.168 ... 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1114 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:24:45.168 ... 00:24:45.168 1109 timing_enter $test_name 00:24:45.168 1110 echo "************************************" 00:24:45.168 1111 echo "START TEST $test_name" 00:24:45.168 1112 echo "************************************" 00:24:45.168 1113 xtrace_restore 00:24:45.168 1114 time "$@" 00:24:45.168 1115 xtrace_disable 00:24:45.168 1116 echo "************************************" 00:24:45.168 1117 echo "END TEST $test_name" 00:24:45.168 1118 echo "************************************" 00:24:45.168 1119 timing_exit $test_name 00:24:45.168 ... 00:24:45.168 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:24:45.168 ... 00:24:45.168 138 00:24:45.168 139 create_transport 00:24:45.168 140 00:24:45.168 141 run_test "fio_dif_1_default" fio_dif_1 00:24:45.168 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:24:45.168 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:24:45.168 144 run_test "fio_dif_digest" fio_dif_digest 00:24:45.168 145 00:24:45.168 146 trap - SIGINT SIGTERM EXIT 00:24:45.168 147 nvmftestfini 00:24:45.168 ... 
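The frames above show how the failing command is assembled: dif.sh:82 builds the fio command line and autotest_common.sh:1341 launches it with the SPDK bdev fio plugin preloaded. A minimal sketch of re-running that step by hand is below; it is not taken verbatim from this run — bdev.json is a placeholder for the JSON config the script passes via /dev/fd/62, the randread workload mirrors the read-only I/O seen in this log, and bs/numjobs/iodepth mirror the NULL_DIF=2 case from dif.sh line 109:

    # Sketch only: approximate the dif.sh:82 fio step against an already-running
    # NVMe-oF/TCP target that exposes the Nvme0n1/Nvme1n1/Nvme2n1 bdevs.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk        # repo path taken from the backtrace
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev fio \
        --ioengine=spdk_bdev --spdk_json_conf=./bdev.json \
        --name=fio_dif_rand_params --rw=randread --bs=4k \
        --numjobs=8 --iodepth=16 \
        --filename=Nvme0n1:Nvme1n1:Nvme2n1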
00:24:45.168 00:24:45.168 ========== Backtrace end ========== 00:24:45.168 12:08:49 -- common/autotest_common.sh@1183 -- # return 0 00:24:45.168 00:24:45.168 real 0m18.133s 00:24:45.168 user 1m55.909s 00:24:45.168 sys 0m2.782s 00:24:45.168 12:08:49 -- common/autotest_common.sh@1 -- # process_shm --id 0 00:24:45.168 12:08:49 -- common/autotest_common.sh@806 -- # type=--id 00:24:45.168 12:08:49 -- common/autotest_common.sh@807 -- # id=0 00:24:45.168 12:08:49 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:45.168 12:08:49 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:45.168 12:08:49 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:45.168 12:08:49 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:45.168 12:08:49 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:45.168 12:08:49 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:45.168 nvmf_trace.0 00:24:45.168 12:08:49 -- common/autotest_common.sh@821 -- # return 0 00:24:45.168 12:08:49 -- common/autotest_common.sh@1 -- # nvmftestfini 00:24:45.168 12:08:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:45.168 12:08:49 -- nvmf/common.sh@116 -- # sync 00:24:45.168 12:08:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:45.168 12:08:49 -- nvmf/common.sh@119 -- # set +e 00:24:45.168 12:08:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:45.168 12:08:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:45.168 rmmod nvme_tcp 00:24:45.168 rmmod nvme_fabrics 00:24:45.168 rmmod nvme_keyring 00:24:45.168 12:08:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:45.168 12:08:49 -- nvmf/common.sh@123 -- # set -e 00:24:45.168 12:08:49 -- nvmf/common.sh@124 -- # return 0 00:24:45.168 12:08:49 -- nvmf/common.sh@477 -- # '[' -n 87048 ']' 00:24:45.168 12:08:49 -- nvmf/common.sh@478 -- # killprocess 87048 00:24:45.168 12:08:49 -- common/autotest_common.sh@936 -- # '[' -z 87048 ']' 00:24:45.168 12:08:49 -- common/autotest_common.sh@940 -- # kill -0 87048 00:24:45.168 12:08:49 -- common/autotest_common.sh@941 -- # uname 00:24:45.168 12:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.168 12:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87048 00:24:45.168 killing process with pid 87048 00:24:45.168 12:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:45.168 12:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:45.168 12:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87048' 00:24:45.168 12:08:49 -- common/autotest_common.sh@955 -- # kill 87048 00:24:45.168 12:08:49 -- common/autotest_common.sh@960 -- # wait 87048 00:24:45.168 12:08:50 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:24:45.168 12:08:50 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:45.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:45.168 Waiting for block devices as requested 00:24:45.168 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:24:45.168 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:24:45.168 12:08:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:45.168 12:08:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:45.168 12:08:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.168 12:08:50 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:24:45.168 12:08:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.168 12:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:45.168 12:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.168 12:08:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:45.168 12:08:50 -- common/autotest_common.sh@1114 -- # trap - ERR 00:24:45.168 12:08:50 -- common/autotest_common.sh@1114 -- # print_backtrace 00:24:45.168 12:08:50 -- common/autotest_common.sh@1142 -- # [[ ehxBET =~ e ]] 00:24:45.168 12:08:50 -- common/autotest_common.sh@1144 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:24:45.168 12:08:50 -- common/autotest_common.sh@1144 -- # local args 00:24:45.169 12:08:50 -- common/autotest_common.sh@1146 -- # xtrace_disable 00:24:45.169 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:45.169 ========== Backtrace start: ========== 00:24:45.169 00:24:45.426 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1114 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:24:45.426 ... 00:24:45.426 1109 timing_enter $test_name 00:24:45.426 1110 echo "************************************" 00:24:45.426 1111 echo "START TEST $test_name" 00:24:45.426 1112 echo "************************************" 00:24:45.426 1113 xtrace_restore 00:24:45.426 1114 time "$@" 00:24:45.426 1115 xtrace_disable 00:24:45.426 1116 echo "************************************" 00:24:45.426 1117 echo "END TEST $test_name" 00:24:45.426 1118 echo "************************************" 00:24:45.426 1119 timing_exit $test_name 00:24:45.426 ... 00:24:45.426 in /home/vagrant/spdk_repo/spdk/autotest.sh:287 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:24:45.426 ... 00:24:45.426 282 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:45.426 283 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:24:45.426 284 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:45.426 285 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:45.426 286 fi 00:24:45.426 => 287 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:24:45.426 288 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:24:45.426 289 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "fc" ]; then 00:24:45.426 290 run_test "nvmf_fc" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:45.426 291 run_test "spdkcli_nvmf_fc" $rootdir/test/spdkcli/nvmf.sh 00:24:45.426 292 else 00:24:45.426 ... 
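Just before this backtrace the error-path cleanup (nvmftestfini) runs: the kernel NVMe/TCP modules are unloaded, target process 87048 is killed, and setup.sh reset rebinds the NVMe devices. A rough hand-driven equivalent is sketched below; the pid, interface, and namespace names are the ones printed in this log, and the namespace removal is best-effort since whether it exists depends on the target-in-namespace setup:

    # Sketch of the nvmftestfini/nvmfcleanup teardown seen above.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 87048                                   # nvmf target (reactor_0) from this run
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset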
00:24:45.426 00:24:45.426 ========== Backtrace end ========== 00:24:45.426 12:08:50 -- common/autotest_common.sh@1183 -- # return 0 00:24:45.426 00:24:45.426 real 0m43.463s 00:24:45.426 user 2m56.045s 00:24:45.426 sys 0m10.908s 00:24:45.426 12:08:50 -- common/autotest_common.sh@1 -- # autotest_cleanup 00:24:45.426 12:08:50 -- common/autotest_common.sh@1381 -- # local autotest_es=18 00:24:45.426 12:08:50 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:24:45.426 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:24:57.636 INFO: APP EXITING 00:24:57.636 INFO: killing all VMs 00:24:57.636 INFO: killing vhost app 00:24:57.636 INFO: EXIT DONE 00:24:57.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:57.636 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:24:57.636 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:24:58.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:58.573 Cleaning 00:24:58.573 Removing: /var/run/dpdk/spdk0/config 00:24:58.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:58.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:58.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:58.573 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:58.573 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:58.573 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:58.573 Removing: /var/run/dpdk/spdk1/config 00:24:58.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:58.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:58.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:58.573 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:58.573 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:58.573 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:58.573 Removing: /var/run/dpdk/spdk2/config 00:24:58.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:58.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:58.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:58.573 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:58.573 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:58.573 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:58.573 Removing: /var/run/dpdk/spdk3/config 00:24:58.573 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:58.573 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:58.573 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:58.573 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:58.573 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:58.573 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:58.573 Removing: /var/run/dpdk/spdk4/config 00:24:58.573 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:58.573 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:58.573 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:58.573 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:58.573 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:58.573 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:58.573 Removing: /dev/shm/nvmf_trace.0 00:24:58.573 Removing: /dev/shm/spdk_tgt_trace.pid65573 00:24:58.573 Removing: /var/run/dpdk/spdk0 00:24:58.573 Removing: /var/run/dpdk/spdk1 00:24:58.573 Removing: /var/run/dpdk/spdk2 00:24:58.573 Removing: /var/run/dpdk/spdk3 00:24:58.573 Removing: 
/var/run/dpdk/spdk4 00:24:58.573 Removing: /var/run/dpdk/spdk_pid65414 00:24:58.573 Removing: /var/run/dpdk/spdk_pid65573 00:24:58.573 Removing: /var/run/dpdk/spdk_pid65831 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66022 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66188 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66266 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66349 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66447 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66537 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66575 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66605 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66679 00:24:58.573 Removing: /var/run/dpdk/spdk_pid66773 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67225 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67277 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67328 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67344 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67424 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67445 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67520 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67536 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67576 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67594 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67640 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67662 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67798 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67828 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67914 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67969 00:24:58.573 Removing: /var/run/dpdk/spdk_pid67993 00:24:58.573 Removing: /var/run/dpdk/spdk_pid68057 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68077 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68111 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68131 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68166 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68186 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68226 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68240 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68280 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68299 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68334 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68359 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68388 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68413 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68447 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68467 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68507 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68521 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68561 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68575 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68615 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68637 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68671 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68691 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68731 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68750 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68785 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68804 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68839 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68864 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68893 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68918 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68957 00:24:58.832 Removing: /var/run/dpdk/spdk_pid68975 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69018 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69035 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69078 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69092 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69131 00:24:58.832 
Removing: /var/run/dpdk/spdk_pid69146 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69182 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69259 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69351 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69689 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69701 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69737 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69750 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69769 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69798 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69805 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69824 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69853 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69866 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69879 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69908 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69921 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69940 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69962 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69976 00:24:58.832 Removing: /var/run/dpdk/spdk_pid69995 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70013 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70031 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70050 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70085 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70103 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70131 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70206 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70232 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70242 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70276 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70291 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70304 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70339 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70356 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70388 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70400 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70403 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70416 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70429 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70437 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70444 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70457 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70488 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70516 00:24:58.832 Removing: /var/run/dpdk/spdk_pid70525 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70559 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70569 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70582 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70622 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70635 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70667 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70680 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70687 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70695 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70708 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70721 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70723 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70736 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70817 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70870 00:24:59.091 Removing: /var/run/dpdk/spdk_pid70997 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71030 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71074 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71094 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71114 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71134 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71169 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71183 00:24:59.091 Removing: 
/var/run/dpdk/spdk_pid71259 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71279 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71333 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71436 00:24:59.091 Removing: /var/run/dpdk/spdk_pid71493 00:24:59.092 Removing: /var/run/dpdk/spdk_pid71527 00:24:59.092 Removing: /var/run/dpdk/spdk_pid71626 00:24:59.092 Removing: /var/run/dpdk/spdk_pid71672 00:24:59.092 Removing: /var/run/dpdk/spdk_pid71709 00:24:59.092 Removing: /var/run/dpdk/spdk_pid71938 00:24:59.092 Removing: /var/run/dpdk/spdk_pid72030 00:24:59.092 Removing: /var/run/dpdk/spdk_pid72056 00:24:59.092 Removing: /var/run/dpdk/spdk_pid72389 00:24:59.092 Removing: /var/run/dpdk/spdk_pid72427 00:24:59.092 Removing: /var/run/dpdk/spdk_pid72742 00:24:59.092 Removing: /var/run/dpdk/spdk_pid73167 00:24:59.092 Removing: /var/run/dpdk/spdk_pid73444 00:24:59.092 Removing: /var/run/dpdk/spdk_pid74245 00:24:59.092 Removing: /var/run/dpdk/spdk_pid75088 00:24:59.092 Removing: /var/run/dpdk/spdk_pid75211 00:24:59.092 Removing: /var/run/dpdk/spdk_pid75281 00:24:59.092 Removing: /var/run/dpdk/spdk_pid76574 00:24:59.092 Removing: /var/run/dpdk/spdk_pid76799 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77130 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77240 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77373 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77401 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77431 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77456 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77559 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77693 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77854 00:24:59.092 Removing: /var/run/dpdk/spdk_pid77935 00:24:59.092 Removing: /var/run/dpdk/spdk_pid78329 00:24:59.092 Removing: /var/run/dpdk/spdk_pid78684 00:24:59.092 Removing: /var/run/dpdk/spdk_pid78687 00:24:59.092 Removing: /var/run/dpdk/spdk_pid80907 00:24:59.092 Removing: /var/run/dpdk/spdk_pid80913 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81198 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81215 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81229 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81266 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81271 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81360 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81363 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81471 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81473 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81581 00:24:59.092 Removing: /var/run/dpdk/spdk_pid81588 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82009 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82052 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82162 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82248 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82576 00:24:59.092 Removing: /var/run/dpdk/spdk_pid82773 00:24:59.092 Removing: /var/run/dpdk/spdk_pid83162 00:24:59.092 Removing: /var/run/dpdk/spdk_pid83704 00:24:59.092 Removing: /var/run/dpdk/spdk_pid84155 00:24:59.092 Removing: /var/run/dpdk/spdk_pid84215 00:24:59.092 Removing: /var/run/dpdk/spdk_pid84277 00:24:59.092 Removing: /var/run/dpdk/spdk_pid84328 00:24:59.092 Removing: /var/run/dpdk/spdk_pid84451 00:24:59.351 Removing: /var/run/dpdk/spdk_pid84513 00:24:59.351 Removing: /var/run/dpdk/spdk_pid84574 00:24:59.351 Removing: /var/run/dpdk/spdk_pid84634 00:24:59.351 Removing: /var/run/dpdk/spdk_pid84966 00:24:59.351 Removing: /var/run/dpdk/spdk_pid86149 00:24:59.351 Removing: /var/run/dpdk/spdk_pid86301 00:24:59.351 Removing: /var/run/dpdk/spdk_pid86543 00:24:59.351 Removing: /var/run/dpdk/spdk_pid87105 
00:24:59.351 Removing: /var/run/dpdk/spdk_pid87265 00:24:59.351 Removing: /var/run/dpdk/spdk_pid87433 00:24:59.351 Removing: /var/run/dpdk/spdk_pid87531 00:24:59.351 Clean 00:25:05.912 killing process with pid 59753 00:25:05.912 killing process with pid 59756 00:25:05.912 12:09:10 -- common/autotest_common.sh@1446 -- # return 18 00:25:05.912 12:09:10 -- common/autotest_common.sh@1 -- # : 00:25:05.912 12:09:10 -- common/autotest_common.sh@1 -- # exit 1 00:25:05.924 [Pipeline] } 00:25:05.948 [Pipeline] // timeout 00:25:05.957 [Pipeline] } 00:25:05.977 [Pipeline] // stage 00:25:05.985 [Pipeline] } 00:25:05.990 ERROR: script returned exit code 1 00:25:05.990 Setting overall build result to FAILURE 00:25:06.008 [Pipeline] // catchError 00:25:06.018 [Pipeline] stage 00:25:06.021 [Pipeline] { (Stop VM) 00:25:06.036 [Pipeline] sh 00:25:06.315 + vagrant halt 00:25:09.605 ==> default: Halting domain... 00:25:14.894 [Pipeline] sh 00:25:15.174 + vagrant destroy -f 00:25:18.461 ==> default: Removing domain... 00:25:18.474 [Pipeline] sh 00:25:18.756 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:25:18.766 [Pipeline] } 00:25:18.781 [Pipeline] // stage 00:25:18.786 [Pipeline] } 00:25:18.800 [Pipeline] // dir 00:25:18.806 [Pipeline] } 00:25:18.821 [Pipeline] // wrap 00:25:18.828 [Pipeline] } 00:25:18.840 [Pipeline] // catchError 00:25:18.850 [Pipeline] stage 00:25:18.853 [Pipeline] { (Epilogue) 00:25:18.866 [Pipeline] sh 00:25:19.149 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:20.539 [Pipeline] catchError 00:25:20.541 [Pipeline] { 00:25:20.555 [Pipeline] sh 00:25:20.837 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:21.097 Artifacts sizes are good 00:25:21.107 [Pipeline] } 00:25:21.123 [Pipeline] // catchError 00:25:21.135 [Pipeline] archiveArtifacts 00:25:21.142 Archiving artifacts 00:25:21.270 [Pipeline] cleanWs 00:25:21.286 [WS-CLEANUP] Deleting project workspace... 00:25:21.286 [WS-CLEANUP] Deferred wipeout is used... 00:25:21.293 [WS-CLEANUP] done 00:25:21.294 [Pipeline] } 00:25:21.310 [Pipeline] // stage 00:25:21.315 [Pipeline] } 00:25:21.329 [Pipeline] // node 00:25:21.334 [Pipeline] End of Pipeline 00:25:21.375 Finished: FAILURE
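The build ends in FAILURE because autotest_cleanup propagated exit code 18 from the aborted nvmf_dif test. A short post-mortem sketch follows; the artifact path and archive name are the ones printed above, while the direct dif.sh invocation is an assumption based on the arguments recorded in the backtrace (--iso --transport=tcp) and needs a prepared SPDK test environment, since the vagrant VM used here was already destroyed:

    # 1. Pull the archived nvmf trace from the collected artifacts.
    cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output
    tar -xzf nvmf_trace.0_shm.tar.gz             # yields nvmf_trace.0

    # 2. Re-run only the failing suite from an SPDK checkout.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/dif.sh --iso --transport=tcp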